Test Report: Docker_Linux 21724

360d9e050a05bd2ed6961537be9e77a8ddcd2d56:2025-10-13:41891

Failed tests (1/347)

Order  Failed test   Duration (s)
259    TestSkaffold  37.44
TestSkaffold (37.44s)

=== RUN   TestSkaffold
skaffold_test.go:59: (dbg) Run:  /tmp/skaffold.exe608739235 version
skaffold_test.go:63: skaffold version: v2.16.1
skaffold_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p skaffold-600759 --memory=3072 --driver=docker  --container-runtime=docker
skaffold_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p skaffold-600759 --memory=3072 --driver=docker  --container-runtime=docker: (23.704360403s)
skaffold_test.go:86: copying out/minikube-linux-amd64 to /home/jenkins/workspace/Docker_Linux_integration/out/minikube
skaffold_test.go:105: (dbg) Run:  /tmp/skaffold.exe608739235 run --minikube-profile skaffold-600759 --kube-context skaffold-600759 --status-check=true --port-forward=false --interactive=false
skaffold_test.go:105: (dbg) Non-zero exit: /tmp/skaffold.exe608739235 run --minikube-profile skaffold-600759 --kube-context skaffold-600759 --status-check=true --port-forward=false --interactive=false: exit status 1 (6.710427243s)

-- stdout --
	Generating tags...
	 - leeroy-web -> leeroy-web:latest
	 - leeroy-app -> leeroy-app:latest
	 - base -> base:latest
	Some taggers failed. Rerun with -vdebug for errors.
	Checking cache...
	 - leeroy-web: Not found. Building
	 - leeroy-app: Not found. Building
	 - base: Not found. Building
	Starting build...
	Found [skaffold-600759] context, using local docker daemon.
	Building [base]...
	Target platforms: [linux/amd64]
	Sending build context to Docker daemon  2.048kB
	Step 1/3 : FROM gcr.io/distroless/base
	latest: Pulling from distroless/base
	fd4aa3667332: Pulling fs layer
	bfb59b82a9b6: Pulling fs layer
	017886f7e176: Pulling fs layer
	62de241dac5f: Pulling fs layer
	2780920e5dbf: Pulling fs layer
	7c12895b777b: Pulling fs layer
	3214acf345c0: Pulling fs layer
	5664b15f108b: Pulling fs layer
	045fc1c20da8: Pulling fs layer
	4aa0ea1413d3: Pulling fs layer
	da7816fa955e: Pulling fs layer
	ddf74a63f7d8: Pulling fs layer
	e7fa9df358f0: Pulling fs layer
	d8a0d911b13e: Pulling fs layer
	5664b15f108b: Waiting
	045fc1c20da8: Waiting
	4aa0ea1413d3: Waiting
	da7816fa955e: Waiting
	ddf74a63f7d8: Waiting
	e7fa9df358f0: Waiting
	62de241dac5f: Waiting
	2780920e5dbf: Waiting
	7c12895b777b: Waiting
	3214acf345c0: Waiting
	d8a0d911b13e: Waiting
	fd4aa3667332: Verifying Checksum
	fd4aa3667332: Download complete
	bfb59b82a9b6: Verifying Checksum
	bfb59b82a9b6: Download complete
	017886f7e176: Verifying Checksum
	017886f7e176: Download complete
	7c12895b777b: Verifying Checksum
	7c12895b777b: Download complete
	2780920e5dbf: Verifying Checksum
	2780920e5dbf: Download complete
	fd4aa3667332: Pull complete
	bfb59b82a9b6: Pull complete
	62de241dac5f: Verifying Checksum
	62de241dac5f: Download complete
	5664b15f108b: Download complete
	3214acf345c0: Download complete
	017886f7e176: Pull complete
	62de241dac5f: Pull complete
	045fc1c20da8: Verifying Checksum
	045fc1c20da8: Download complete
	2780920e5dbf: Pull complete
	7c12895b777b: Pull complete
	3214acf345c0: Pull complete
	5664b15f108b: Pull complete
	045fc1c20da8: Pull complete
	4aa0ea1413d3: Verifying Checksum
	4aa0ea1413d3: Download complete
	da7816fa955e: Verifying Checksum
	da7816fa955e: Download complete
	4aa0ea1413d3: Pull complete
	da7816fa955e: Pull complete
	ddf74a63f7d8: Download complete
	ddf74a63f7d8: Pull complete
	d8a0d911b13e: Verifying Checksum
	d8a0d911b13e: Download complete
	e7fa9df358f0: Verifying Checksum
	e7fa9df358f0: Download complete
	e7fa9df358f0: Pull complete
	d8a0d911b13e: Pull complete
	Digest: sha256:9e9b50d2048db3741f86a48d939b4e4cc775f5889b3496439343301ff54cdba8
	Status: Downloaded newer image for gcr.io/distroless/base:latest
	 ---> 314086290b80
	Step 2/3 : ENV GOTRACEBACK=single
	 ---> Running in 00945de271c8
	 ---> ea52c5a41e97
	Step 3/3 : CMD ["./app"]
	 ---> Running in 0809e99c6571
	 ---> 6d137c5a8316
	Successfully built 6d137c5a8316
	Successfully tagged base:latest
	Build [base] succeeded
	Building [leeroy-app]...
	Target platforms: [linux/amd64]
	Sending build context to Docker daemon  4.096kB
	Step 1/9 : ARG BASE
	Step 2/9 : FROM golang:1.18 as builder
	Building [leeroy-web]...
	Target platforms: [linux/amd64]
	Build [leeroy-web] was canceled

-- /stdout --
** stderr ** 
	build [leeroy-app] failed: docker build failure: toomanyrequests: You have reached your pull rate limit as 'minikubebot': dckr_jti_W89jo-sMmu2ZeG4U1lTVn5LowXk=. You may increase the limit by upgrading. https://www.docker.com/increase-rate-limit. Please fix the Dockerfile and try again..

** /stderr **
skaffold_test.go:107: error running skaffold: exit status 1
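
Note: the failure above is Docker Hub's pull rate limit ("toomanyrequests"), hit by the 'minikubebot' account while pulling golang:1.18 for the leeroy-app image; the base image built successfully because gcr.io/distroless/base is pulled from gcr.io, which is not subject to Docker Hub's limit. The remaining pull quota can be inspected with Docker's documented ratelimitpreview check (a sketch, assuming curl and jq are available; without credentials it reports the anonymous per-IP quota rather than an account's):

	# fetch a pull token scoped to the ratelimitpreview/test repository
	TOKEN=$(curl -s "https://auth.docker.io/token?service=registry.docker.io&scope=repository:ratelimitpreview/test:pull" | jq -r .token)
	# HEAD request: the quota appears in the ratelimit-limit / ratelimit-remaining response headers
	curl -s --head -H "Authorization: Bearer $TOKEN" "https://registry-1.docker.io/v2/ratelimitpreview/test/manifests/latest" | grep -i ratelimit

A ratelimit-remaining of 0 would be consistent with the error seen here; retrying after the window resets, or pulling through a registry mirror, avoids the limit.
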
panic.go:636: *** TestSkaffold FAILED at 2025-10-13 14:09:06.439688958 +0000 UTC m=+2128.535391278
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestSkaffold]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestSkaffold]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect skaffold-600759
helpers_test.go:243: (dbg) docker inspect skaffold-600759:

-- stdout --
	[
	    {
	        "Id": "0bc01f8b66b5e94656af3fbfed074a68c6a484b637b9bf1ab5a9b97d4ed83699",
	        "Created": "2025-10-13T14:08:40.721574238Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1086854,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-13T14:08:40.7573264Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:c6fde2176fdc734a0b1cf5396bccb3dc7d4299b26808035c9aa3b16b26946dbd",
	        "ResolvConfPath": "/var/lib/docker/containers/0bc01f8b66b5e94656af3fbfed074a68c6a484b637b9bf1ab5a9b97d4ed83699/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/0bc01f8b66b5e94656af3fbfed074a68c6a484b637b9bf1ab5a9b97d4ed83699/hostname",
	        "HostsPath": "/var/lib/docker/containers/0bc01f8b66b5e94656af3fbfed074a68c6a484b637b9bf1ab5a9b97d4ed83699/hosts",
	        "LogPath": "/var/lib/docker/containers/0bc01f8b66b5e94656af3fbfed074a68c6a484b637b9bf1ab5a9b97d4ed83699/0bc01f8b66b5e94656af3fbfed074a68c6a484b637b9bf1ab5a9b97d4ed83699-json.log",
	        "Name": "/skaffold-600759",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "skaffold-600759:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "skaffold-600759",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "0bc01f8b66b5e94656af3fbfed074a68c6a484b637b9bf1ab5a9b97d4ed83699",
	                "LowerDir": "/var/lib/docker/overlay2/a78dec82b5f68e40300f51de43b83b015797c57615e143a68b4a595a8b13e561-init/diff:/var/lib/docker/overlay2/3ca0dbfe0764e1e4674a3bf7155dad506c3286fc280b31af582a3eaa6577aea9/diff",
	                "MergedDir": "/var/lib/docker/overlay2/a78dec82b5f68e40300f51de43b83b015797c57615e143a68b4a595a8b13e561/merged",
	                "UpperDir": "/var/lib/docker/overlay2/a78dec82b5f68e40300f51de43b83b015797c57615e143a68b4a595a8b13e561/diff",
	                "WorkDir": "/var/lib/docker/overlay2/a78dec82b5f68e40300f51de43b83b015797c57615e143a68b4a595a8b13e561/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "skaffold-600759",
	                "Source": "/var/lib/docker/volumes/skaffold-600759/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "skaffold-600759",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "skaffold-600759",
	                "name.minikube.sigs.k8s.io": "skaffold-600759",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "fb2a35cc1caedc4596f028cf8245eb3458f96b371a9d9afc221b34aea9ead76a",
	            "SandboxKey": "/var/run/docker/netns/fb2a35cc1cae",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33348"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33349"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33352"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33350"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33351"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "skaffold-600759": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "46:4d:85:c3:1c:f6",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "2b0dbed557b9cd4b3986c982ffdacfe098a27c674ee8363b52b08cf72487ade3",
	                    "EndpointID": "ef59f240cc4ebb9e5d5299bdac8a2b294fc0f396c778580642ef502774a5a05e",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "skaffold-600759",
	                        "0bc01f8b66b5"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p skaffold-600759 -n skaffold-600759
helpers_test.go:252: <<< TestSkaffold FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestSkaffold]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p skaffold-600759 logs -n 25
helpers_test.go:260: TestSkaffold logs: 
-- stdout --
	
	==> Audit <==
	┌────────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────┬──────────┬─────────┬─────────────────────┬─────────────────────┐
	│  COMMAND   │                                                                            ARGS                                                                             │        PROFILE        │   USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────┼──────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start      │ -p multinode-542745-m02 --driver=docker  --container-runtime=docker                                                                                         │ multinode-542745-m02  │ jenkins  │ v1.37.0 │ 13 Oct 25 14:04 UTC │                     │
	│ start      │ -p multinode-542745-m03 --driver=docker  --container-runtime=docker                                                                                         │ multinode-542745-m03  │ jenkins  │ v1.37.0 │ 13 Oct 25 14:04 UTC │ 13 Oct 25 14:05 UTC │
	│ node       │ add -p multinode-542745                                                                                                                                     │ multinode-542745      │ jenkins  │ v1.37.0 │ 13 Oct 25 14:05 UTC │                     │
	│ delete     │ -p multinode-542745-m03                                                                                                                                     │ multinode-542745-m03  │ jenkins  │ v1.37.0 │ 13 Oct 25 14:05 UTC │ 13 Oct 25 14:05 UTC │
	│ delete     │ -p multinode-542745                                                                                                                                         │ multinode-542745      │ jenkins  │ v1.37.0 │ 13 Oct 25 14:05 UTC │ 13 Oct 25 14:05 UTC │
	│ start      │ -p test-preload-319116 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.32.0 │ test-preload-319116   │ jenkins  │ v1.37.0 │ 13 Oct 25 14:05 UTC │ 13 Oct 25 14:05 UTC │
	│ image      │ test-preload-319116 image pull gcr.io/k8s-minikube/busybox                                                                                                  │ test-preload-319116   │ jenkins  │ v1.37.0 │ 13 Oct 25 14:05 UTC │ 13 Oct 25 14:05 UTC │
	│ stop       │ -p test-preload-319116                                                                                                                                      │ test-preload-319116   │ jenkins  │ v1.37.0 │ 13 Oct 25 14:05 UTC │ 13 Oct 25 14:05 UTC │
	│ start      │ -p test-preload-319116 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=docker                                         │ test-preload-319116   │ jenkins  │ v1.37.0 │ 13 Oct 25 14:05 UTC │ 13 Oct 25 14:06 UTC │
	│ image      │ test-preload-319116 image list                                                                                                                              │ test-preload-319116   │ jenkins  │ v1.37.0 │ 13 Oct 25 14:06 UTC │ 13 Oct 25 14:06 UTC │
	│ delete     │ -p test-preload-319116                                                                                                                                      │ test-preload-319116   │ jenkins  │ v1.37.0 │ 13 Oct 25 14:06 UTC │ 13 Oct 25 14:06 UTC │
	│ start      │ -p scheduled-stop-902075 --memory=3072 --driver=docker  --container-runtime=docker                                                                          │ scheduled-stop-902075 │ jenkins  │ v1.37.0 │ 13 Oct 25 14:06 UTC │ 13 Oct 25 14:07 UTC │
	│ stop       │ -p scheduled-stop-902075 --schedule 5m                                                                                                                      │ scheduled-stop-902075 │ jenkins  │ v1.37.0 │ 13 Oct 25 14:07 UTC │                     │
	│ stop       │ -p scheduled-stop-902075 --schedule 5m                                                                                                                      │ scheduled-stop-902075 │ jenkins  │ v1.37.0 │ 13 Oct 25 14:07 UTC │                     │
	│ stop       │ -p scheduled-stop-902075 --schedule 5m                                                                                                                      │ scheduled-stop-902075 │ jenkins  │ v1.37.0 │ 13 Oct 25 14:07 UTC │                     │
	│ stop       │ -p scheduled-stop-902075 --schedule 15s                                                                                                                     │ scheduled-stop-902075 │ jenkins  │ v1.37.0 │ 13 Oct 25 14:07 UTC │                     │
	│ stop       │ -p scheduled-stop-902075 --schedule 15s                                                                                                                     │ scheduled-stop-902075 │ jenkins  │ v1.37.0 │ 13 Oct 25 14:07 UTC │                     │
	│ stop       │ -p scheduled-stop-902075 --schedule 15s                                                                                                                     │ scheduled-stop-902075 │ jenkins  │ v1.37.0 │ 13 Oct 25 14:07 UTC │                     │
	│ stop       │ -p scheduled-stop-902075 --cancel-scheduled                                                                                                                 │ scheduled-stop-902075 │ jenkins  │ v1.37.0 │ 13 Oct 25 14:07 UTC │ 13 Oct 25 14:07 UTC │
	│ stop       │ -p scheduled-stop-902075 --schedule 15s                                                                                                                     │ scheduled-stop-902075 │ jenkins  │ v1.37.0 │ 13 Oct 25 14:07 UTC │                     │
	│ stop       │ -p scheduled-stop-902075 --schedule 15s                                                                                                                     │ scheduled-stop-902075 │ jenkins  │ v1.37.0 │ 13 Oct 25 14:07 UTC │                     │
	│ stop       │ -p scheduled-stop-902075 --schedule 15s                                                                                                                     │ scheduled-stop-902075 │ jenkins  │ v1.37.0 │ 13 Oct 25 14:07 UTC │ 13 Oct 25 14:08 UTC │
	│ delete     │ -p scheduled-stop-902075                                                                                                                                    │ scheduled-stop-902075 │ jenkins  │ v1.37.0 │ 13 Oct 25 14:08 UTC │ 13 Oct 25 14:08 UTC │
	│ start      │ -p skaffold-600759 --memory=3072 --driver=docker  --container-runtime=docker                                                                                │ skaffold-600759       │ jenkins  │ v1.37.0 │ 13 Oct 25 14:08 UTC │ 13 Oct 25 14:08 UTC │
	│ docker-env │ --shell none -p skaffold-600759 --user=skaffold                                                                                                             │ skaffold-600759       │ skaffold │ v1.37.0 │ 13 Oct 25 14:08 UTC │ 13 Oct 25 14:09 UTC │
	└────────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────┴──────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/13 14:08:35
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1013 14:08:35.922555 1086283 out.go:360] Setting OutFile to fd 1 ...
	I1013 14:08:35.922638 1086283 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1013 14:08:35.922641 1086283 out.go:374] Setting ErrFile to fd 2...
	I1013 14:08:35.922644 1086283 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1013 14:08:35.922837 1086283 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21724-845765/.minikube/bin
	I1013 14:08:35.923337 1086283 out.go:368] Setting JSON to false
	I1013 14:08:35.924329 1086283 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":24649,"bootTime":1760339867,"procs":194,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1013 14:08:35.924422 1086283 start.go:141] virtualization: kvm guest
	I1013 14:08:35.926680 1086283 out.go:179] * [skaffold-600759] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1013 14:08:35.927861 1086283 out.go:179]   - MINIKUBE_LOCATION=21724
	I1013 14:08:35.927878 1086283 notify.go:220] Checking for updates...
	I1013 14:08:35.929727 1086283 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1013 14:08:35.930713 1086283 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21724-845765/kubeconfig
	I1013 14:08:35.932189 1086283 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21724-845765/.minikube
	I1013 14:08:35.933105 1086283 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1013 14:08:35.934008 1086283 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1013 14:08:35.934989 1086283 driver.go:421] Setting default libvirt URI to qemu:///system
	I1013 14:08:35.958782 1086283 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1013 14:08:35.958860 1086283 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1013 14:08:36.012897 1086283 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:false NGoroutines:45 SystemTime:2025-10-13 14:08:36.003302012 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1013 14:08:36.012995 1086283 docker.go:318] overlay module found
	I1013 14:08:36.014560 1086283 out.go:179] * Using the docker driver based on user configuration
	I1013 14:08:36.015561 1086283 start.go:305] selected driver: docker
	I1013 14:08:36.015567 1086283 start.go:925] validating driver "docker" against <nil>
	I1013 14:08:36.015576 1086283 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1013 14:08:36.016123 1086283 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1013 14:08:36.073209 1086283 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:false NGoroutines:45 SystemTime:2025-10-13 14:08:36.063498532 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1013 14:08:36.073378 1086283 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1013 14:08:36.073579 1086283 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1013 14:08:36.075199 1086283 out.go:179] * Using Docker driver with root privileges
	I1013 14:08:36.076165 1086283 cni.go:84] Creating CNI manager for ""
	I1013 14:08:36.076225 1086283 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1013 14:08:36.076233 1086283 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1013 14:08:36.076295 1086283 start.go:349] cluster config:
	{Name:skaffold-600759 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:skaffold-600759 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRun
time:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1013 14:08:36.077574 1086283 out.go:179] * Starting "skaffold-600759" primary control-plane node in "skaffold-600759" cluster
	I1013 14:08:36.078614 1086283 cache.go:123] Beginning downloading kic base image for docker with docker
	I1013 14:08:36.079788 1086283 out.go:179] * Pulling base image v0.0.48-1759745255-21703 ...
	I1013 14:08:36.080822 1086283 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime docker
	I1013 14:08:36.080865 1086283 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21724-845765/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-docker-overlay2-amd64.tar.lz4
	I1013 14:08:36.080872 1086283 cache.go:58] Caching tarball of preloaded images
	I1013 14:08:36.080921 1086283 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon
	I1013 14:08:36.080974 1086283 preload.go:233] Found /home/jenkins/minikube-integration/21724-845765/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1013 14:08:36.080984 1086283 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on docker
	I1013 14:08:36.081392 1086283 profile.go:143] Saving config to /home/jenkins/minikube-integration/21724-845765/.minikube/profiles/skaffold-600759/config.json ...
	I1013 14:08:36.081423 1086283 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-845765/.minikube/profiles/skaffold-600759/config.json: {Name:mk58ae9485859341196626921a5f8128471ddab3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 14:08:36.100365 1086283 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon, skipping pull
	I1013 14:08:36.100389 1086283 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 exists in daemon, skipping load
	I1013 14:08:36.100404 1086283 cache.go:232] Successfully downloaded all kic artifacts
	I1013 14:08:36.100426 1086283 start.go:360] acquireMachinesLock for skaffold-600759: {Name:mke496305f5e5c038a027d04d6cd8b1852188c64 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1013 14:08:36.100517 1086283 start.go:364] duration metric: took 79.62µs to acquireMachinesLock for "skaffold-600759"
	I1013 14:08:36.100536 1086283 start.go:93] Provisioning new machine with config: &{Name:skaffold-600759 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:skaffold-600759 Namespace:default APIServerHAVIP: APIServerName
:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSH
AgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1013 14:08:36.100595 1086283 start.go:125] createHost starting for "" (driver="docker")
	I1013 14:08:36.102215 1086283 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1013 14:08:36.102470 1086283 start.go:159] libmachine.API.Create for "skaffold-600759" (driver="docker")
	I1013 14:08:36.102492 1086283 client.go:168] LocalClient.Create starting
	I1013 14:08:36.102575 1086283 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21724-845765/.minikube/certs/ca.pem
	I1013 14:08:36.102602 1086283 main.go:141] libmachine: Decoding PEM data...
	I1013 14:08:36.102616 1086283 main.go:141] libmachine: Parsing certificate...
	I1013 14:08:36.102676 1086283 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21724-845765/.minikube/certs/cert.pem
	I1013 14:08:36.102689 1086283 main.go:141] libmachine: Decoding PEM data...
	I1013 14:08:36.102695 1086283 main.go:141] libmachine: Parsing certificate...
	I1013 14:08:36.103005 1086283 cli_runner.go:164] Run: docker network inspect skaffold-600759 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1013 14:08:36.118807 1086283 cli_runner.go:211] docker network inspect skaffold-600759 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1013 14:08:36.118870 1086283 network_create.go:284] running [docker network inspect skaffold-600759] to gather additional debugging logs...
	I1013 14:08:36.118883 1086283 cli_runner.go:164] Run: docker network inspect skaffold-600759
	W1013 14:08:36.135784 1086283 cli_runner.go:211] docker network inspect skaffold-600759 returned with exit code 1
	I1013 14:08:36.135798 1086283 network_create.go:287] error running [docker network inspect skaffold-600759]: docker network inspect skaffold-600759: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network skaffold-600759 not found
	I1013 14:08:36.135809 1086283 network_create.go:289] output of [docker network inspect skaffold-600759]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network skaffold-600759 not found
	
	** /stderr **
	I1013 14:08:36.135891 1086283 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1013 14:08:36.152435 1086283 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-ef0be46c41b2 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:86:64:18:f7:35:96} reservation:<nil>}
	I1013 14:08:36.152919 1086283 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-55c6e9b40aad IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:82:d2:9c:d4:2e:2c} reservation:<nil>}
	I1013 14:08:36.153466 1086283 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-86d040a1ec93 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:fe:91:83:6e:42:82} reservation:<nil>}
	I1013 14:08:36.154210 1086283 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001d67a30}
	I1013 14:08:36.154229 1086283 network_create.go:124] attempt to create docker network skaffold-600759 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1013 14:08:36.154279 1086283 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=skaffold-600759 skaffold-600759
	I1013 14:08:36.209577 1086283 network_create.go:108] docker network skaffold-600759 192.168.76.0/24 created
	I1013 14:08:36.209606 1086283 kic.go:121] calculated static IP "192.168.76.2" for the "skaffold-600759" container
	I1013 14:08:36.209677 1086283 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1013 14:08:36.224859 1086283 cli_runner.go:164] Run: docker volume create skaffold-600759 --label name.minikube.sigs.k8s.io=skaffold-600759 --label created_by.minikube.sigs.k8s.io=true
	I1013 14:08:36.241679 1086283 oci.go:103] Successfully created a docker volume skaffold-600759
	I1013 14:08:36.241761 1086283 cli_runner.go:164] Run: docker run --rm --name skaffold-600759-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=skaffold-600759 --entrypoint /usr/bin/test -v skaffold-600759:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 -d /var/lib
	I1013 14:08:36.875454 1086283 oci.go:107] Successfully prepared a docker volume skaffold-600759
	I1013 14:08:36.875494 1086283 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime docker
	I1013 14:08:36.875514 1086283 kic.go:194] Starting extracting preloaded images to volume ...
	I1013 14:08:36.875585 1086283 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21724-845765/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v skaffold-600759:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 -I lz4 -xf /preloaded.tar -C /extractDir
	I1013 14:08:40.648896 1086283 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21724-845765/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v skaffold-600759:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 -I lz4 -xf /preloaded.tar -C /extractDir: (3.773246193s)
	I1013 14:08:40.648922 1086283 kic.go:203] duration metric: took 3.773404109s to extract preloaded images to volume ...
	W1013 14:08:40.649002 1086283 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1013 14:08:40.649033 1086283 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1013 14:08:40.649067 1086283 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1013 14:08:40.706658 1086283 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname skaffold-600759 --name skaffold-600759 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=skaffold-600759 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=skaffold-600759 --network skaffold-600759 --ip 192.168.76.2 --volume skaffold-600759:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92
	I1013 14:08:40.966190 1086283 cli_runner.go:164] Run: docker container inspect skaffold-600759 --format={{.State.Running}}
	I1013 14:08:40.983946 1086283 cli_runner.go:164] Run: docker container inspect skaffold-600759 --format={{.State.Status}}
	I1013 14:08:41.001236 1086283 cli_runner.go:164] Run: docker exec skaffold-600759 stat /var/lib/dpkg/alternatives/iptables
	I1013 14:08:41.045246 1086283 oci.go:144] the created container "skaffold-600759" has a running status.
	I1013 14:08:41.045271 1086283 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21724-845765/.minikube/machines/skaffold-600759/id_rsa...
	I1013 14:08:41.658406 1086283 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21724-845765/.minikube/machines/skaffold-600759/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1013 14:08:41.682303 1086283 cli_runner.go:164] Run: docker container inspect skaffold-600759 --format={{.State.Status}}
	I1013 14:08:41.700347 1086283 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1013 14:08:41.700364 1086283 kic_runner.go:114] Args: [docker exec --privileged skaffold-600759 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1013 14:08:41.745747 1086283 cli_runner.go:164] Run: docker container inspect skaffold-600759 --format={{.State.Status}}
	I1013 14:08:41.763292 1086283 machine.go:93] provisionDockerMachine start ...
	I1013 14:08:41.763368 1086283 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" skaffold-600759
	I1013 14:08:41.781007 1086283 main.go:141] libmachine: Using SSH client type: native
	I1013 14:08:41.781324 1086283 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33348 <nil> <nil>}
	I1013 14:08:41.781363 1086283 main.go:141] libmachine: About to run SSH command:
	hostname
	I1013 14:08:41.928441 1086283 main.go:141] libmachine: SSH cmd err, output: <nil>: skaffold-600759
	
	I1013 14:08:41.928466 1086283 ubuntu.go:182] provisioning hostname "skaffold-600759"
	I1013 14:08:41.928558 1086283 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" skaffold-600759
	I1013 14:08:41.946420 1086283 main.go:141] libmachine: Using SSH client type: native
	I1013 14:08:41.946669 1086283 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33348 <nil> <nil>}
	I1013 14:08:41.946677 1086283 main.go:141] libmachine: About to run SSH command:
	sudo hostname skaffold-600759 && echo "skaffold-600759" | sudo tee /etc/hostname
	I1013 14:08:42.104760 1086283 main.go:141] libmachine: SSH cmd err, output: <nil>: skaffold-600759
	
	I1013 14:08:42.104828 1086283 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" skaffold-600759
	I1013 14:08:42.122455 1086283 main.go:141] libmachine: Using SSH client type: native
	I1013 14:08:42.122659 1086283 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33348 <nil> <nil>}
	I1013 14:08:42.122670 1086283 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sskaffold-600759' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 skaffold-600759/g' /etc/hosts;
				else 
					echo '127.0.1.1 skaffold-600759' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1013 14:08:42.270647 1086283 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1013 14:08:42.270670 1086283 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21724-845765/.minikube CaCertPath:/home/jenkins/minikube-integration/21724-845765/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21724-845765/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21724-845765/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21724-845765/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21724-845765/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21724-845765/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21724-845765/.minikube}
	I1013 14:08:42.270695 1086283 ubuntu.go:190] setting up certificates
	I1013 14:08:42.270706 1086283 provision.go:84] configureAuth start
	I1013 14:08:42.270775 1086283 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" skaffold-600759
	I1013 14:08:42.288880 1086283 provision.go:143] copyHostCerts
	I1013 14:08:42.288954 1086283 exec_runner.go:144] found /home/jenkins/minikube-integration/21724-845765/.minikube/ca.pem, removing ...
	I1013 14:08:42.288963 1086283 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21724-845765/.minikube/ca.pem
	I1013 14:08:42.289042 1086283 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21724-845765/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21724-845765/.minikube/ca.pem (1078 bytes)
	I1013 14:08:42.289263 1086283 exec_runner.go:144] found /home/jenkins/minikube-integration/21724-845765/.minikube/cert.pem, removing ...
	I1013 14:08:42.289274 1086283 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21724-845765/.minikube/cert.pem
	I1013 14:08:42.289334 1086283 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21724-845765/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21724-845765/.minikube/cert.pem (1123 bytes)
	I1013 14:08:42.289441 1086283 exec_runner.go:144] found /home/jenkins/minikube-integration/21724-845765/.minikube/key.pem, removing ...
	I1013 14:08:42.289446 1086283 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21724-845765/.minikube/key.pem
	I1013 14:08:42.289484 1086283 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21724-845765/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21724-845765/.minikube/key.pem (1675 bytes)
	I1013 14:08:42.289561 1086283 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21724-845765/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21724-845765/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21724-845765/.minikube/certs/ca-key.pem org=jenkins.skaffold-600759 san=[127.0.0.1 192.168.76.2 localhost minikube skaffold-600759]
	I1013 14:08:42.571976 1086283 provision.go:177] copyRemoteCerts
	I1013 14:08:42.572037 1086283 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1013 14:08:42.572078 1086283 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" skaffold-600759
	I1013 14:08:42.590262 1086283 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33348 SSHKeyPath:/home/jenkins/minikube-integration/21724-845765/.minikube/machines/skaffold-600759/id_rsa Username:docker}
	I1013 14:08:42.695322 1086283 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-845765/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1013 14:08:42.716052 1086283 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-845765/.minikube/machines/server.pem --> /etc/docker/server.pem (1212 bytes)
	I1013 14:08:42.735009 1086283 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-845765/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1013 14:08:42.754078 1086283 provision.go:87] duration metric: took 483.355244ms to configureAuth
	I1013 14:08:42.754123 1086283 ubuntu.go:206] setting minikube options for container-runtime
	I1013 14:08:42.754293 1086283 config.go:182] Loaded profile config "skaffold-600759": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.1
	I1013 14:08:42.754338 1086283 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" skaffold-600759
	I1013 14:08:42.773776 1086283 main.go:141] libmachine: Using SSH client type: native
	I1013 14:08:42.773986 1086283 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33348 <nil> <nil>}
	I1013 14:08:42.773992 1086283 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1013 14:08:42.923485 1086283 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I1013 14:08:42.923506 1086283 ubuntu.go:71] root file system type: overlay
	I1013 14:08:42.923657 1086283 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1013 14:08:42.923744 1086283 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" skaffold-600759
	I1013 14:08:42.942220 1086283 main.go:141] libmachine: Using SSH client type: native
	I1013 14:08:42.942435 1086283 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33348 <nil> <nil>}
	I1013 14:08:42.942497 1086283 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
		-H fd:// --containerd=/run/containerd/containerd.sock \
		-H unix:///var/run/docker.sock \
		--default-ulimit=nofile=1048576:1048576 \
		--tlsverify \
		--tlscacert /etc/docker/ca.pem \
		--tlscert /etc/docker/server.pem \
		--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1013 14:08:43.105542 1086283 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
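The unit echoed back above is what enables remote access to the daemon: dockerd listens on tcp://0.0.0.0:2376 with --tlsverify against the ca.pem installed by the copyRemoteCerts step, and --insecure-registry 10.96.0.0/12 covers the cluster's Service CIDR. A sketch of a client-side TLS handshake against that endpoint using the profile certs named earlier in the log; this is an illustration of the TLS setup, not how minikube itself connects:

	package main

	import (
		"crypto/tls"
		"crypto/x509"
		"fmt"
		"os"
	)

	func main() {
		base := "/home/jenkins/minikube-integration/21724-845765/.minikube"
		caPEM, err := os.ReadFile(base + "/certs/ca.pem")
		if err != nil {
			panic(err)
		}
		pool := x509.NewCertPool()
		pool.AppendCertsFromPEM(caPEM)
		cert, err := tls.LoadX509KeyPair(base+"/certs/cert.pem", base+"/certs/key.pem")
		if err != nil {
			panic(err)
		}
		// Node IP and port taken from the log above.
		conn, err := tls.Dial("tcp", "192.168.76.2:2376", &tls.Config{
			RootCAs:      pool,                    // verify the server cert
			Certificates: []tls.Certificate{cert}, // present the client cert
		})
		if err != nil {
			panic(err)
		}
		defer conn.Close()
		fmt.Println("TLS OK:", conn.ConnectionState().PeerCertificates[0].Subject)
	}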
	
	I1013 14:08:43.105637 1086283 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" skaffold-600759
	I1013 14:08:43.124645 1086283 main.go:141] libmachine: Using SSH client type: native
	I1013 14:08:43.124916 1086283 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33348 <nil> <nil>}
	I1013 14:08:43.124936 1086283 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1013 14:08:44.347433 1086283 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2025-10-02 14:52:52.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2025-10-13 14:08:43.102309704 +0000
	@@ -9,23 +9,34 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	 Restart=always
	 
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	+
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	+LimitNOFILE=infinity
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
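The diff above is the expected first-boot path: the command pattern (diff -u old new || { mv; daemon-reload; enable; restart }) only installs the rendered unit and restarts Docker when it differs from what is on disk, so re-provisioning an unchanged machine skips the restart. The same compare-then-swap idea sketched in Go, as an illustration of the pattern rather than minikube source (it needs root to have any real effect):

	package main

	import (
		"bytes"
		"os"
		"os/exec"
	)

	func updateUnit(current, staged string) error {
		// A missing current unit reads as nil, which never equals the staged bytes.
		cur, _ := os.ReadFile(current)
		next, err := os.ReadFile(staged)
		if err != nil {
			return err
		}
		if bytes.Equal(cur, next) {
			return os.Remove(staged) // unchanged: drop the staged copy, no restart
		}
		if err := os.Rename(staged, current); err != nil {
			return err
		}
		for _, args := range [][]string{
			{"systemctl", "-f", "daemon-reload"},
			{"systemctl", "-f", "enable", "docker"},
			{"systemctl", "-f", "restart", "docker"},
		} {
			if err := exec.Command(args[0], args[1:]...).Run(); err != nil {
				return err
			}
		}
		return nil
	}

	func main() {
		_ = updateUnit("/lib/systemd/system/docker.service",
			"/lib/systemd/system/docker.service.new")
	}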
	
	I1013 14:08:44.347472 1086283 machine.go:96] duration metric: took 2.584163977s to provisionDockerMachine
	I1013 14:08:44.347488 1086283 client.go:171] duration metric: took 8.244990824s to LocalClient.Create
	I1013 14:08:44.347515 1086283 start.go:167] duration metric: took 8.245044188s to libmachine.API.Create "skaffold-600759"
	I1013 14:08:44.347524 1086283 start.go:293] postStartSetup for "skaffold-600759" (driver="docker")
	I1013 14:08:44.347538 1086283 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1013 14:08:44.347610 1086283 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1013 14:08:44.347658 1086283 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" skaffold-600759
	I1013 14:08:44.366367 1086283 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33348 SSHKeyPath:/home/jenkins/minikube-integration/21724-845765/.minikube/machines/skaffold-600759/id_rsa Username:docker}
	I1013 14:08:44.473362 1086283 ssh_runner.go:195] Run: cat /etc/os-release
	I1013 14:08:44.477201 1086283 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1013 14:08:44.477219 1086283 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1013 14:08:44.477230 1086283 filesync.go:126] Scanning /home/jenkins/minikube-integration/21724-845765/.minikube/addons for local assets ...
	I1013 14:08:44.477281 1086283 filesync.go:126] Scanning /home/jenkins/minikube-integration/21724-845765/.minikube/files for local assets ...
	I1013 14:08:44.477351 1086283 filesync.go:149] local asset: /home/jenkins/minikube-integration/21724-845765/.minikube/files/etc/ssl/certs/8494012.pem -> 8494012.pem in /etc/ssl/certs
	I1013 14:08:44.477446 1086283 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1013 14:08:44.485855 1086283 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-845765/.minikube/files/etc/ssl/certs/8494012.pem --> /etc/ssl/certs/8494012.pem (1708 bytes)
	I1013 14:08:44.507673 1086283 start.go:296] duration metric: took 160.130745ms for postStartSetup
	I1013 14:08:44.508054 1086283 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" skaffold-600759
	I1013 14:08:44.526245 1086283 profile.go:143] Saving config to /home/jenkins/minikube-integration/21724-845765/.minikube/profiles/skaffold-600759/config.json ...
	I1013 14:08:44.526526 1086283 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1013 14:08:44.526567 1086283 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" skaffold-600759
	I1013 14:08:44.544475 1086283 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33348 SSHKeyPath:/home/jenkins/minikube-integration/21724-845765/.minikube/machines/skaffold-600759/id_rsa Username:docker}
	I1013 14:08:44.646812 1086283 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1013 14:08:44.651922 1086283 start.go:128] duration metric: took 8.551310055s to createHost
	I1013 14:08:44.651943 1086283 start.go:83] releasing machines lock for "skaffold-600759", held for 8.551417925s
	I1013 14:08:44.652021 1086283 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" skaffold-600759
	I1013 14:08:44.669860 1086283 ssh_runner.go:195] Run: cat /version.json
	I1013 14:08:44.669890 1086283 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1013 14:08:44.669904 1086283 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" skaffold-600759
	I1013 14:08:44.669966 1086283 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" skaffold-600759
	I1013 14:08:44.688877 1086283 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33348 SSHKeyPath:/home/jenkins/minikube-integration/21724-845765/.minikube/machines/skaffold-600759/id_rsa Username:docker}
	I1013 14:08:44.689464 1086283 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33348 SSHKeyPath:/home/jenkins/minikube-integration/21724-845765/.minikube/machines/skaffold-600759/id_rsa Username:docker}
	I1013 14:08:44.789910 1086283 ssh_runner.go:195] Run: systemctl --version
	I1013 14:08:44.848111 1086283 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1013 14:08:44.853349 1086283 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1013 14:08:44.853430 1086283 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1013 14:08:44.881490 1086283 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1013 14:08:44.881514 1086283 start.go:495] detecting cgroup driver to use...
	I1013 14:08:44.881553 1086283 detect.go:190] detected "systemd" cgroup driver on host os
	I1013 14:08:44.881678 1086283 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1013 14:08:44.898246 1086283 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1013 14:08:44.909661 1086283 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1013 14:08:44.919482 1086283 containerd.go:146] configuring containerd to use "systemd" as cgroup driver...
	I1013 14:08:44.919540 1086283 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I1013 14:08:44.929162 1086283 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1013 14:08:44.938594 1086283 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1013 14:08:44.948391 1086283 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1013 14:08:44.958316 1086283 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1013 14:08:44.967543 1086283 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1013 14:08:44.977283 1086283 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1013 14:08:44.987409 1086283 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1013 14:08:44.997768 1086283 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1013 14:08:45.005876 1086283 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1013 14:08:45.014037 1086283 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1013 14:08:45.094383 1086283 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1013 14:08:45.171556 1086283 start.go:495] detecting cgroup driver to use...
	I1013 14:08:45.171601 1086283 detect.go:190] detected "systemd" cgroup driver on host os
	I1013 14:08:45.171654 1086283 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1013 14:08:45.185528 1086283 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1013 14:08:45.198805 1086283 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1013 14:08:45.216403 1086283 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1013 14:08:45.229591 1086283 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1013 14:08:45.243295 1086283 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1013 14:08:45.258766 1086283 ssh_runner.go:195] Run: which cri-dockerd
	I1013 14:08:45.262880 1086283 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1013 14:08:45.273871 1086283 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (192 bytes)
	I1013 14:08:45.287662 1086283 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1013 14:08:45.370858 1086283 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1013 14:08:45.452263 1086283 docker.go:575] configuring docker to use "systemd" as cgroup driver...
	I1013 14:08:45.452373 1086283 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (129 bytes)
	I1013 14:08:45.466738 1086283 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I1013 14:08:45.479698 1086283 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1013 14:08:45.565671 1086283 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1013 14:08:46.392608 1086283 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1013 14:08:46.406393 1086283 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I1013 14:08:46.420579 1086283 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1013 14:08:46.434768 1086283 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1013 14:08:46.525313 1086283 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1013 14:08:46.615312 1086283 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1013 14:08:46.702670 1086283 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1013 14:08:46.733765 1086283 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I1013 14:08:46.747902 1086283 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1013 14:08:46.830429 1086283 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I1013 14:08:46.907369 1086283 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1013 14:08:46.921489 1086283 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1013 14:08:46.921553 1086283 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1013 14:08:46.926129 1086283 start.go:563] Will wait 60s for crictl version
	I1013 14:08:46.926212 1086283 ssh_runner.go:195] Run: which crictl
	I1013 14:08:46.930314 1086283 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1013 14:08:46.958471 1086283 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  28.5.0
	RuntimeApiVersion:  v1
	I1013 14:08:46.958520 1086283 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1013 14:08:46.986579 1086283 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1013 14:08:47.016332 1086283 out.go:252] * Preparing Kubernetes v1.34.1 on Docker 28.5.0 ...
	I1013 14:08:47.016402 1086283 cli_runner.go:164] Run: docker network inspect skaffold-600759 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1013 14:08:47.034283 1086283 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1013 14:08:47.038966 1086283 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1013 14:08:47.049954 1086283 kubeadm.go:883] updating cluster {Name:skaffold-600759 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:skaffold-600759 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1013 14:08:47.050060 1086283 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime docker
	I1013 14:08:47.050130 1086283 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1013 14:08:47.073626 1086283 docker.go:691] Got preloaded images: -- stdout --
	registry.k8s.io/kube-scheduler:v1.34.1
	registry.k8s.io/kube-apiserver:v1.34.1
	registry.k8s.io/kube-controller-manager:v1.34.1
	registry.k8s.io/kube-proxy:v1.34.1
	registry.k8s.io/etcd:3.6.4-0
	registry.k8s.io/pause:3.10.1
	registry.k8s.io/coredns/coredns:v1.12.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1013 14:08:47.073639 1086283 docker.go:621] Images already preloaded, skipping extraction
	I1013 14:08:47.073693 1086283 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1013 14:08:47.096076 1086283 docker.go:691] Got preloaded images: -- stdout --
	registry.k8s.io/kube-scheduler:v1.34.1
	registry.k8s.io/kube-apiserver:v1.34.1
	registry.k8s.io/kube-controller-manager:v1.34.1
	registry.k8s.io/kube-proxy:v1.34.1
	registry.k8s.io/etcd:3.6.4-0
	registry.k8s.io/pause:3.10.1
	registry.k8s.io/coredns/coredns:v1.12.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1013 14:08:47.096109 1086283 cache_images.go:85] Images are preloaded, skipping loading
	I1013 14:08:47.096138 1086283 kubeadm.go:934] updating node { 192.168.76.2 8443 v1.34.1 docker true true} ...
	I1013 14:08:47.096242 1086283 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=skaffold-600759 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:skaffold-600759 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1013 14:08:47.096297 1086283 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1013 14:08:47.150071 1086283 cni.go:84] Creating CNI manager for ""
	I1013 14:08:47.150116 1086283 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1013 14:08:47.150137 1086283 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1013 14:08:47.150157 1086283 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:skaffold-600759 NodeName:skaffold-600759 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1013 14:08:47.150279 1086283 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "skaffold-600759"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
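The kubeadm config printed above is a single multi-document YAML stream: InitConfiguration and ClusterConfiguration drive kubeadm itself, while the KubeletConfiguration and KubeProxyConfiguration documents are forwarded by kubeadm to those components. A toy Go splitter that makes the structure explicit; illustrative only, not part of minikube:

	package main

	import (
		"fmt"
		"strings"
	)

	// kinds lists the "kind:" of each document in a multi-document YAML stream.
	func kinds(stream string) []string {
		var out []string
		for _, doc := range strings.Split(stream, "\n---\n") {
			for _, line := range strings.Split(doc, "\n") {
				if k, ok := strings.CutPrefix(line, "kind: "); ok {
					out = append(out, k)
				}
			}
		}
		return out
	}

	func main() {
		stream := "kind: InitConfiguration\n---\nkind: ClusterConfiguration\n---\n" +
			"kind: KubeletConfiguration\n---\nkind: KubeProxyConfiguration\n"
		// Prints: [InitConfiguration ClusterConfiguration KubeletConfiguration KubeProxyConfiguration]
		fmt.Println(kinds(stream))
	}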
	
	I1013 14:08:47.150338 1086283 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1013 14:08:47.159249 1086283 binaries.go:44] Found k8s binaries, skipping transfer
	I1013 14:08:47.159315 1086283 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1013 14:08:47.167601 1086283 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (314 bytes)
	I1013 14:08:47.181241 1086283 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1013 14:08:47.194629 1086283 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2215 bytes)
	I1013 14:08:47.207935 1086283 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1013 14:08:47.212018 1086283 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1013 14:08:47.223114 1086283 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1013 14:08:47.306434 1086283 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1013 14:08:47.332778 1086283 certs.go:69] Setting up /home/jenkins/minikube-integration/21724-845765/.minikube/profiles/skaffold-600759 for IP: 192.168.76.2
	I1013 14:08:47.332793 1086283 certs.go:195] generating shared ca certs ...
	I1013 14:08:47.332813 1086283 certs.go:227] acquiring lock for ca certs: {Name:mk51a15d90077d4d48a4378abd8bb6ade742ad6e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 14:08:47.332976 1086283 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21724-845765/.minikube/ca.key
	I1013 14:08:47.333043 1086283 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21724-845765/.minikube/proxy-client-ca.key
	I1013 14:08:47.333053 1086283 certs.go:257] generating profile certs ...
	I1013 14:08:47.333139 1086283 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21724-845765/.minikube/profiles/skaffold-600759/client.key
	I1013 14:08:47.333148 1086283 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21724-845765/.minikube/profiles/skaffold-600759/client.crt with IP's: []
	I1013 14:08:47.700389 1086283 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21724-845765/.minikube/profiles/skaffold-600759/client.crt ...
	I1013 14:08:47.700410 1086283 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-845765/.minikube/profiles/skaffold-600759/client.crt: {Name:mkbb431e08bf484811890407f0abe3e51f985034 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 14:08:47.700613 1086283 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21724-845765/.minikube/profiles/skaffold-600759/client.key ...
	I1013 14:08:47.700620 1086283 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-845765/.minikube/profiles/skaffold-600759/client.key: {Name:mkc935bc5c7aa2d56c8f28ed99a4b2d46fee42e9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 14:08:47.700707 1086283 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21724-845765/.minikube/profiles/skaffold-600759/apiserver.key.58ae2b95
	I1013 14:08:47.700719 1086283 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21724-845765/.minikube/profiles/skaffold-600759/apiserver.crt.58ae2b95 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1013 14:08:48.806518 1086283 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21724-845765/.minikube/profiles/skaffold-600759/apiserver.crt.58ae2b95 ...
	I1013 14:08:48.806539 1086283 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-845765/.minikube/profiles/skaffold-600759/apiserver.crt.58ae2b95: {Name:mkf4341de32540907c173f93726610aec506f733 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 14:08:48.806713 1086283 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21724-845765/.minikube/profiles/skaffold-600759/apiserver.key.58ae2b95 ...
	I1013 14:08:48.806721 1086283 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-845765/.minikube/profiles/skaffold-600759/apiserver.key.58ae2b95: {Name:mk3d18ff48695ef27aed2eb30b60ddab347320b1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 14:08:48.806797 1086283 certs.go:382] copying /home/jenkins/minikube-integration/21724-845765/.minikube/profiles/skaffold-600759/apiserver.crt.58ae2b95 -> /home/jenkins/minikube-integration/21724-845765/.minikube/profiles/skaffold-600759/apiserver.crt
	I1013 14:08:48.806865 1086283 certs.go:386] copying /home/jenkins/minikube-integration/21724-845765/.minikube/profiles/skaffold-600759/apiserver.key.58ae2b95 -> /home/jenkins/minikube-integration/21724-845765/.minikube/profiles/skaffold-600759/apiserver.key
	I1013 14:08:48.806944 1086283 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21724-845765/.minikube/profiles/skaffold-600759/proxy-client.key
	I1013 14:08:48.806961 1086283 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21724-845765/.minikube/profiles/skaffold-600759/proxy-client.crt with IP's: []
	I1013 14:08:48.981703 1086283 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21724-845765/.minikube/profiles/skaffold-600759/proxy-client.crt ...
	I1013 14:08:48.981722 1086283 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-845765/.minikube/profiles/skaffold-600759/proxy-client.crt: {Name:mk50c1e4fe0257783240bb92a889f9d60a6e497a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 14:08:48.981905 1086283 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21724-845765/.minikube/profiles/skaffold-600759/proxy-client.key ...
	I1013 14:08:48.981912 1086283 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-845765/.minikube/profiles/skaffold-600759/proxy-client.key: {Name:mkb5fd25ab20a93b76ba9a577d0ea5b4b05d3112 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
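The certs generated above follow a two-tier layout: long-lived CAs under .minikube are reused across profiles (the "skipping valid ... ca cert" lines), while per-profile leaf certs are issued fresh; the apiserver cert's SANs combine the in-cluster Service VIP (10.96.0.1, the first address of ServiceCIDR 10.96.0.0/12), loopback, and the node IP 192.168.76.2. A compressed sketch of issuing such a leaf with Go's crypto/x509; names and lifetimes here are illustrative assumptions, not minikube's actual code paths:

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"fmt"
		"math/big"
		"net"
		"time"
	)

	func main() {
		caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
		ca := &x509.Certificate{
			SerialNumber:          big.NewInt(1),
			Subject:               pkix.Name{CommonName: "minikubeCA"},
			NotBefore:             time.Now(),
			NotAfter:              time.Now().Add(26280 * time.Hour), // CertExpiration from the profile config
			IsCA:                  true,
			KeyUsage:              x509.KeyUsageCertSign,
			BasicConstraintsValid: true,
		}
		leafKey, _ := rsa.GenerateKey(rand.Reader, 2048)
		leaf := &x509.Certificate{
			SerialNumber: big.NewInt(2),
			Subject:      pkix.Name{CommonName: "minikube"},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(26280 * time.Hour),
			// SANs matching the apiserver cert generation logged above:
			IPAddresses: []net.IP{
				net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
				net.ParseIP("10.0.0.1"), net.ParseIP("192.168.76.2"),
			},
			KeyUsage:    x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		}
		// Sign the leaf with the CA key; der is the certificate to PEM-encode.
		der, err := x509.CreateCertificate(rand.Reader, leaf, ca, &leafKey.PublicKey, caKey)
		if err != nil {
			panic(err)
		}
		fmt.Println("issued leaf cert,", len(der), "DER bytes")
	}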
	I1013 14:08:48.982110 1086283 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-845765/.minikube/certs/849401.pem (1338 bytes)
	W1013 14:08:48.982144 1086283 certs.go:480] ignoring /home/jenkins/minikube-integration/21724-845765/.minikube/certs/849401_empty.pem, impossibly tiny 0 bytes
	I1013 14:08:48.982151 1086283 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-845765/.minikube/certs/ca-key.pem (1675 bytes)
	I1013 14:08:48.982170 1086283 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-845765/.minikube/certs/ca.pem (1078 bytes)
	I1013 14:08:48.982188 1086283 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-845765/.minikube/certs/cert.pem (1123 bytes)
	I1013 14:08:48.982212 1086283 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-845765/.minikube/certs/key.pem (1675 bytes)
	I1013 14:08:48.982255 1086283 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-845765/.minikube/files/etc/ssl/certs/8494012.pem (1708 bytes)
	I1013 14:08:48.982902 1086283 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-845765/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1013 14:08:49.003306 1086283 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-845765/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1013 14:08:49.022917 1086283 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-845765/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1013 14:08:49.043149 1086283 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-845765/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1013 14:08:49.062193 1086283 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-845765/.minikube/profiles/skaffold-600759/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1013 14:08:49.081137 1086283 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-845765/.minikube/profiles/skaffold-600759/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1013 14:08:49.101071 1086283 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-845765/.minikube/profiles/skaffold-600759/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1013 14:08:49.121688 1086283 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-845765/.minikube/profiles/skaffold-600759/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1013 14:08:49.140531 1086283 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-845765/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1013 14:08:49.162565 1086283 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-845765/.minikube/certs/849401.pem --> /usr/share/ca-certificates/849401.pem (1338 bytes)
	I1013 14:08:49.182449 1086283 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-845765/.minikube/files/etc/ssl/certs/8494012.pem --> /usr/share/ca-certificates/8494012.pem (1708 bytes)
	I1013 14:08:49.202000 1086283 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1013 14:08:49.216362 1086283 ssh_runner.go:195] Run: openssl version
	I1013 14:08:49.223121 1086283 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1013 14:08:49.232356 1086283 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1013 14:08:49.236425 1086283 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 13 13:34 /usr/share/ca-certificates/minikubeCA.pem
	I1013 14:08:49.236477 1086283 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1013 14:08:49.271768 1086283 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1013 14:08:49.281372 1086283 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/849401.pem && ln -fs /usr/share/ca-certificates/849401.pem /etc/ssl/certs/849401.pem"
	I1013 14:08:49.290019 1086283 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/849401.pem
	I1013 14:08:49.294041 1086283 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 13 13:39 /usr/share/ca-certificates/849401.pem
	I1013 14:08:49.294137 1086283 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/849401.pem
	I1013 14:08:49.328452 1086283 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/849401.pem /etc/ssl/certs/51391683.0"
	I1013 14:08:49.337814 1086283 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/8494012.pem && ln -fs /usr/share/ca-certificates/8494012.pem /etc/ssl/certs/8494012.pem"
	I1013 14:08:49.346268 1086283 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8494012.pem
	I1013 14:08:49.350028 1086283 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 13 13:39 /usr/share/ca-certificates/8494012.pem
	I1013 14:08:49.350067 1086283 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8494012.pem
	I1013 14:08:49.384347 1086283 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/8494012.pem /etc/ssl/certs/3ec20f2e.0"
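The openssl x509 -hash / ln -fs pairs above implement OpenSSL's hashed-directory lookup: a CA in /etc/ssl/certs is located by a symlink named <subject-hash>.0 pointing at the PEM, which is why each installed certificate gets a companion link like b5213941.0. A minimal Go wrapper around the same two commands; an illustration of the convention with a path taken from the log, and it needs root to write into /etc/ssl/certs:

	package main

	import (
		"os"
		"os/exec"
		"path/filepath"
		"strings"
	)

	func linkByHash(pem string) error {
		// openssl x509 -hash -noout prints the subject-name hash, e.g. "b5213941".
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
		if err != nil {
			return err
		}
		link := filepath.Join("/etc/ssl/certs", strings.TrimSpace(string(out))+".0")
		_ = os.Remove(link) // ln -fs semantics: replace any stale link
		return os.Symlink(pem, link)
	}

	func main() {
		_ = linkByHash("/usr/share/ca-certificates/minikubeCA.pem")
	}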
	I1013 14:08:49.392736 1086283 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1013 14:08:49.396371 1086283 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1013 14:08:49.396414 1086283 kubeadm.go:400] StartCluster: {Name:skaffold-600759 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:skaffold-600759 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1013 14:08:49.396510 1086283 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1013 14:08:49.415872 1086283 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1013 14:08:49.423600 1086283 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1013 14:08:49.431340 1086283 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1013 14:08:49.431395 1086283 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1013 14:08:49.438725 1086283 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1013 14:08:49.438735 1086283 kubeadm.go:157] found existing configuration files:
	
	I1013 14:08:49.438778 1086283 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1013 14:08:49.446074 1086283 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1013 14:08:49.446126 1086283 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1013 14:08:49.453048 1086283 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1013 14:08:49.460293 1086283 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1013 14:08:49.460329 1086283 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1013 14:08:49.467361 1086283 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1013 14:08:49.474685 1086283 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1013 14:08:49.474724 1086283 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1013 14:08:49.481777 1086283 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1013 14:08:49.489031 1086283 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1013 14:08:49.489074 1086283 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1013 14:08:49.496033 1086283 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1013 14:08:49.562425 1086283 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1013 14:08:49.620325 1086283 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1013 14:08:58.612716 1086283 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1013 14:08:58.612759 1086283 kubeadm.go:318] [preflight] Running pre-flight checks
	I1013 14:08:58.612835 1086283 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1013 14:08:58.612876 1086283 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1013 14:08:58.612902 1086283 kubeadm.go:318] OS: Linux
	I1013 14:08:58.612936 1086283 kubeadm.go:318] CGROUPS_CPU: enabled
	I1013 14:08:58.612971 1086283 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1013 14:08:58.613039 1086283 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1013 14:08:58.613098 1086283 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1013 14:08:58.613151 1086283 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1013 14:08:58.613194 1086283 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1013 14:08:58.613232 1086283 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1013 14:08:58.613273 1086283 kubeadm.go:318] CGROUPS_IO: enabled
	I1013 14:08:58.613338 1086283 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1013 14:08:58.613457 1086283 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1013 14:08:58.613570 1086283 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1013 14:08:58.613629 1086283 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1013 14:08:58.615653 1086283 out.go:252]   - Generating certificates and keys ...
	I1013 14:08:58.615719 1086283 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1013 14:08:58.615768 1086283 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1013 14:08:58.615818 1086283 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1013 14:08:58.615860 1086283 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1013 14:08:58.615905 1086283 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1013 14:08:58.615943 1086283 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1013 14:08:58.616004 1086283 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1013 14:08:58.616157 1086283 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [localhost skaffold-600759] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1013 14:08:58.616221 1086283 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1013 14:08:58.616335 1086283 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [localhost skaffold-600759] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1013 14:08:58.616395 1086283 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1013 14:08:58.616463 1086283 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1013 14:08:58.616500 1086283 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1013 14:08:58.616544 1086283 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1013 14:08:58.616610 1086283 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1013 14:08:58.616716 1086283 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1013 14:08:58.616768 1086283 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1013 14:08:58.616820 1086283 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1013 14:08:58.616861 1086283 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1013 14:08:58.616940 1086283 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1013 14:08:58.617009 1086283 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1013 14:08:58.617983 1086283 out.go:252]   - Booting up control plane ...
	I1013 14:08:58.618055 1086283 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1013 14:08:58.618158 1086283 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1013 14:08:58.618225 1086283 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1013 14:08:58.618310 1086283 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1013 14:08:58.618384 1086283 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1013 14:08:58.618477 1086283 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1013 14:08:58.618542 1086283 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1013 14:08:58.618571 1086283 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1013 14:08:58.618701 1086283 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1013 14:08:58.618816 1086283 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1013 14:08:58.618894 1086283 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 501.842925ms
	I1013 14:08:58.619012 1086283 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1013 14:08:58.619073 1086283 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	I1013 14:08:58.619200 1086283 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1013 14:08:58.619268 1086283 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1013 14:08:58.619341 1086283 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 1.385876401s
	I1013 14:08:58.619407 1086283 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 2.005059747s
	I1013 14:08:58.619479 1086283 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 3.501184692s
	I1013 14:08:58.619570 1086283 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1013 14:08:58.619709 1086283 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1013 14:08:58.619760 1086283 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1013 14:08:58.619943 1086283 kubeadm.go:318] [mark-control-plane] Marking the node skaffold-600759 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1013 14:08:58.620015 1086283 kubeadm.go:318] [bootstrap-token] Using token: piyn5s.2kp6jsrawp1uyq9s
	I1013 14:08:58.621146 1086283 out.go:252]   - Configuring RBAC rules ...
	I1013 14:08:58.621250 1086283 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1013 14:08:58.621319 1086283 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1013 14:08:58.621452 1086283 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1013 14:08:58.621598 1086283 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1013 14:08:58.621754 1086283 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1013 14:08:58.621831 1086283 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1013 14:08:58.621932 1086283 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1013 14:08:58.621981 1086283 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1013 14:08:58.622025 1086283 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1013 14:08:58.622027 1086283 kubeadm.go:318] 
	I1013 14:08:58.622076 1086283 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1013 14:08:58.622080 1086283 kubeadm.go:318] 
	I1013 14:08:58.622181 1086283 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1013 14:08:58.622185 1086283 kubeadm.go:318] 
	I1013 14:08:58.622209 1086283 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1013 14:08:58.622256 1086283 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1013 14:08:58.622300 1086283 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1013 14:08:58.622302 1086283 kubeadm.go:318] 
	I1013 14:08:58.622346 1086283 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1013 14:08:58.622356 1086283 kubeadm.go:318] 
	I1013 14:08:58.622394 1086283 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1013 14:08:58.622397 1086283 kubeadm.go:318] 
	I1013 14:08:58.622448 1086283 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1013 14:08:58.622509 1086283 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1013 14:08:58.622561 1086283 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1013 14:08:58.622564 1086283 kubeadm.go:318] 
	I1013 14:08:58.622630 1086283 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1013 14:08:58.622695 1086283 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1013 14:08:58.622698 1086283 kubeadm.go:318] 
	I1013 14:08:58.622768 1086283 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token piyn5s.2kp6jsrawp1uyq9s \
	I1013 14:08:58.622860 1086283 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:16d9a7410241b2acfdff9ea6415bd20df136db6f360e1d41e81cf20406588c23 \
	I1013 14:08:58.622876 1086283 kubeadm.go:318] 	--control-plane 
	I1013 14:08:58.622878 1086283 kubeadm.go:318] 
	I1013 14:08:58.622947 1086283 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1013 14:08:58.622949 1086283 kubeadm.go:318] 
	I1013 14:08:58.623058 1086283 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token piyn5s.2kp6jsrawp1uyq9s \
	I1013 14:08:58.623232 1086283 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:16d9a7410241b2acfdff9ea6415bd20df136db6f360e1d41e81cf20406588c23 
	I1013 14:08:58.623240 1086283 cni.go:84] Creating CNI manager for ""
	I1013 14:08:58.623260 1086283 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1013 14:08:58.624426 1086283 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
	I1013 14:08:58.625368 1086283 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1013 14:08:58.634115 1086283 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1013 14:08:58.647804 1086283 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1013 14:08:58.647859 1086283 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1013 14:08:58.647887 1086283 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes skaffold-600759 minikube.k8s.io/updated_at=2025_10_13T14_08_58_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=6d66ff63385795e7745a92b3d96cb54f5b977801 minikube.k8s.io/name=skaffold-600759 minikube.k8s.io/primary=true
	I1013 14:08:58.658142 1086283 ops.go:34] apiserver oom_adj: -16
	I1013 14:08:58.723450 1086283 kubeadm.go:1113] duration metric: took 75.631359ms to wait for elevateKubeSystemPrivileges
	I1013 14:08:58.739983 1086283 kubeadm.go:402] duration metric: took 9.343562645s to StartCluster
	I1013 14:08:58.740016 1086283 settings.go:142] acquiring lock: {Name:mk24de2af2bc4af7e814eea58e5a79fdffd1539a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 14:08:58.740123 1086283 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21724-845765/kubeconfig
	I1013 14:08:58.740820 1086283 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-845765/kubeconfig: {Name:mk457195fd43ec40c74fabe4f2e22723d064915b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 14:08:58.741018 1086283 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1013 14:08:58.741037 1086283 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1013 14:08:58.741100 1086283 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1013 14:08:58.741193 1086283 addons.go:69] Setting storage-provisioner=true in profile "skaffold-600759"
	I1013 14:08:58.741211 1086283 addons.go:238] Setting addon storage-provisioner=true in "skaffold-600759"
	I1013 14:08:58.741219 1086283 addons.go:69] Setting default-storageclass=true in profile "skaffold-600759"
	I1013 14:08:58.741233 1086283 config.go:182] Loaded profile config "skaffold-600759": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.1
	I1013 14:08:58.741243 1086283 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "skaffold-600759"
	I1013 14:08:58.741246 1086283 host.go:66] Checking if "skaffold-600759" exists ...
	I1013 14:08:58.741620 1086283 cli_runner.go:164] Run: docker container inspect skaffold-600759 --format={{.State.Status}}
	I1013 14:08:58.741744 1086283 cli_runner.go:164] Run: docker container inspect skaffold-600759 --format={{.State.Status}}
	I1013 14:08:58.742534 1086283 out.go:179] * Verifying Kubernetes components...
	I1013 14:08:58.743703 1086283 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1013 14:08:58.764989 1086283 addons.go:238] Setting addon default-storageclass=true in "skaffold-600759"
	I1013 14:08:58.765026 1086283 host.go:66] Checking if "skaffold-600759" exists ...
	I1013 14:08:58.765559 1086283 cli_runner.go:164] Run: docker container inspect skaffold-600759 --format={{.State.Status}}
	I1013 14:08:58.766157 1086283 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1013 14:08:58.767975 1086283 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1013 14:08:58.767987 1086283 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1013 14:08:58.768047 1086283 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" skaffold-600759
	I1013 14:08:58.791205 1086283 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1013 14:08:58.791254 1086283 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1013 14:08:58.791347 1086283 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" skaffold-600759
	I1013 14:08:58.800202 1086283 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33348 SSHKeyPath:/home/jenkins/minikube-integration/21724-845765/.minikube/machines/skaffold-600759/id_rsa Username:docker}
	I1013 14:08:58.814291 1086283 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33348 SSHKeyPath:/home/jenkins/minikube-integration/21724-845765/.minikube/machines/skaffold-600759/id_rsa Username:docker}
	I1013 14:08:58.834656 1086283 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1013 14:08:58.887414 1086283 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1013 14:08:58.919640 1086283 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1013 14:08:58.929049 1086283 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1013 14:08:59.015053 1086283 start.go:976] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
	I1013 14:08:59.016008 1086283 api_server.go:52] waiting for apiserver process to appear ...
	I1013 14:08:59.016065 1086283 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1013 14:08:59.196149 1086283 api_server.go:72] duration metric: took 455.077315ms to wait for apiserver process to appear ...
	I1013 14:08:59.196166 1086283 api_server.go:88] waiting for apiserver healthz status ...
	I1013 14:08:59.196185 1086283 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1013 14:08:59.201750 1086283 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1013 14:08:59.202720 1086283 api_server.go:141] control plane version: v1.34.1
	I1013 14:08:59.202742 1086283 api_server.go:131] duration metric: took 6.570255ms to wait for apiserver health ...
	I1013 14:08:59.202751 1086283 system_pods.go:43] waiting for kube-system pods to appear ...
	I1013 14:08:59.203077 1086283 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1013 14:08:59.204222 1086283 addons.go:514] duration metric: took 463.127133ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1013 14:08:59.205323 1086283 system_pods.go:59] 5 kube-system pods found
	I1013 14:08:59.205347 1086283 system_pods.go:61] "etcd-skaffold-600759" [40779ea3-464f-4822-8adc-56ddb6a01424] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1013 14:08:59.205355 1086283 system_pods.go:61] "kube-apiserver-skaffold-600759" [1eb64043-99c7-4905-bb90-5f2737ddd669] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1013 14:08:59.205364 1086283 system_pods.go:61] "kube-controller-manager-skaffold-600759" [156fb531-9d0a-4cdc-a2de-0896c8417b0b] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1013 14:08:59.205375 1086283 system_pods.go:61] "kube-scheduler-skaffold-600759" [c2c56e97-d185-426d-9140-6fdbee90fb4f] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1013 14:08:59.205379 1086283 system_pods.go:61] "storage-provisioner" [a453fdce-cdf1-4d1d-a723-e4452a80c902] Pending
	I1013 14:08:59.205385 1086283 system_pods.go:74] duration metric: took 2.629482ms to wait for pod list to return data ...
	I1013 14:08:59.205396 1086283 kubeadm.go:586] duration metric: took 464.330959ms to wait for: map[apiserver:true system_pods:true]
	I1013 14:08:59.205407 1086283 node_conditions.go:102] verifying NodePressure condition ...
	I1013 14:08:59.207398 1086283 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1013 14:08:59.207412 1086283 node_conditions.go:123] node cpu capacity is 8
	I1013 14:08:59.207423 1086283 node_conditions.go:105] duration metric: took 2.012663ms to run NodePressure ...
	I1013 14:08:59.207441 1086283 start.go:241] waiting for startup goroutines ...
	I1013 14:08:59.519194 1086283 kapi.go:214] "coredns" deployment in "kube-system" namespace and "skaffold-600759" context rescaled to 1 replicas
	I1013 14:08:59.519222 1086283 start.go:246] waiting for cluster config update ...
	I1013 14:08:59.519231 1086283 start.go:255] writing updated cluster config ...
	I1013 14:08:59.519521 1086283 ssh_runner.go:195] Run: rm -f paused
	I1013 14:08:59.568909 1086283 start.go:624] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1013 14:08:59.570714 1086283 out.go:179] * Done! kubectl is now configured to use "skaffold-600759" cluster and "default" namespace by default
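	
	The start log above ends with kubectl already pointed at the new profile. A minimal sanity check at this point — a sketch, assuming the "skaffold-600759" context from the log is still the active kubeconfig entry:
	
	  # confirm the API server answers and the kube-system pods are scheduling
	  kubectl --context skaffold-600759 cluster-info
	  kubectl --context skaffold-600759 get pods -n kube-system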
	
	
	==> Docker <==
	Oct 13 14:08:46 skaffold-600759 dockerd[1051]: time="2025-10-13T14:08:46.390358603Z" level=info msg="API listen on /var/run/docker.sock"
	Oct 13 14:08:46 skaffold-600759 dockerd[1051]: time="2025-10-13T14:08:46.390364508Z" level=info msg="API listen on /run/docker.sock"
	Oct 13 14:08:46 skaffold-600759 systemd[1]: Started docker.service - Docker Application Container Engine.
	Oct 13 14:08:46 skaffold-600759 systemd[1]: Starting cri-docker.service - CRI Interface for Docker Application Container Engine...
	Oct 13 14:08:46 skaffold-600759 cri-dockerd[1360]: time="2025-10-13T14:08:46Z" level=info msg="Starting cri-dockerd dev (HEAD)"
	Oct 13 14:08:46 skaffold-600759 cri-dockerd[1360]: time="2025-10-13T14:08:46Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	Oct 13 14:08:46 skaffold-600759 cri-dockerd[1360]: time="2025-10-13T14:08:46Z" level=info msg="Start docker client with request timeout 0s"
	Oct 13 14:08:46 skaffold-600759 cri-dockerd[1360]: time="2025-10-13T14:08:46Z" level=info msg="Hairpin mode is set to hairpin-veth"
	Oct 13 14:08:46 skaffold-600759 cri-dockerd[1360]: time="2025-10-13T14:08:46Z" level=info msg="Loaded network plugin cni"
	Oct 13 14:08:46 skaffold-600759 cri-dockerd[1360]: time="2025-10-13T14:08:46Z" level=info msg="Docker cri networking managed by network plugin cni"
	Oct 13 14:08:46 skaffold-600759 cri-dockerd[1360]: time="2025-10-13T14:08:46Z" level=info msg="Setting cgroupDriver systemd"
	Oct 13 14:08:46 skaffold-600759 cri-dockerd[1360]: time="2025-10-13T14:08:46Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}"
	Oct 13 14:08:46 skaffold-600759 cri-dockerd[1360]: time="2025-10-13T14:08:46Z" level=info msg="Starting the GRPC backend for the Docker CRI interface."
	Oct 13 14:08:46 skaffold-600759 cri-dockerd[1360]: time="2025-10-13T14:08:46Z" level=info msg="Start cri-dockerd grpc backend"
	Oct 13 14:08:46 skaffold-600759 systemd[1]: Started cri-docker.service - CRI Interface for Docker Application Container Engine.
	Oct 13 14:08:54 skaffold-600759 cri-dockerd[1360]: time="2025-10-13T14:08:54Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/06711b21443f3e7f75ec83e4b950369e5e3278dc1b769cff42cadba987eb3cd9/resolv.conf as [nameserver 192.168.76.1 search local europe-west4-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options edns0 trust-ad ndots:0]"
	Oct 13 14:08:54 skaffold-600759 cri-dockerd[1360]: time="2025-10-13T14:08:54Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/0ada0908093382492f69da54fba2eb9e47350751af1cb906d6ffc221fc12ed83/resolv.conf as [nameserver 192.168.76.1 search local europe-west4-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options edns0 trust-ad ndots:0]"
	Oct 13 14:08:54 skaffold-600759 cri-dockerd[1360]: time="2025-10-13T14:08:54Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/97e4b6282c162a368eda1fc8a71586c70983e35e7348a2e503073974177261a9/resolv.conf as [nameserver 192.168.76.1 search local europe-west4-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options ndots:0 edns0 trust-ad]"
	Oct 13 14:08:54 skaffold-600759 cri-dockerd[1360]: time="2025-10-13T14:08:54Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/21cd502124f3eb46d78bc73e590e4b8459c33bd93154a0bdff8367eea17e1b16/resolv.conf as [nameserver 192.168.76.1 search local europe-west4-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options edns0 trust-ad ndots:0]"
	Oct 13 14:09:04 skaffold-600759 cri-dockerd[1360]: time="2025-10-13T14:09:04Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/f6b12fae8865d7d86c65ce21a6249c1828d7d0319f0b9365aab98a8f773891cc/resolv.conf as [nameserver 192.168.76.1 search local europe-west4-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options edns0 trust-ad ndots:0]"
	Oct 13 14:09:04 skaffold-600759 cri-dockerd[1360]: time="2025-10-13T14:09:04Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/21fdebb22a254ab52aafd31d33b117a63b59011b6b494e5da2e04ea6b324e135/resolv.conf as [nameserver 192.168.76.1 search local europe-west4-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options ndots:0 edns0 trust-ad]"
	Oct 13 14:09:04 skaffold-600759 cri-dockerd[1360]: time="2025-10-13T14:09:04Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/bedd2ec989836bef730a523fe5327090245635aad785ffac25d05c3a1b0028d5/resolv.conf as [nameserver 192.168.76.1 search local europe-west4-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options edns0 trust-ad ndots:0]"
	Oct 13 14:09:05 skaffold-600759 dockerd[1051]: time="2025-10-13T14:09:05.220281860Z" level=info msg="Layer sha256:93bb432fc635ff65b22d8fd06065779d21d54079752b73e679b66e22eb809875 cleaned up"
	Oct 13 14:09:05 skaffold-600759 dockerd[1051]: time="2025-10-13T14:09:05.254227070Z" level=info msg="Layer sha256:93bb432fc635ff65b22d8fd06065779d21d54079752b73e679b66e22eb809875 cleaned up"
	Oct 13 14:09:06 skaffold-600759 dockerd[1051]: time="2025-10-13T14:09:06.411063046Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your pull rate limit as 'minikubebot': dckr_jti_W89jo-sMmu2ZeG4U1lTVn5LowXk=. You may increase the limit by upgrading. https://www.docker.com/increase-rate-limit"
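	
	The "toomanyrequests" line above is Docker Hub's pull rate limit rejecting the pull (here for the "minikubebot" account), which is what aborts the build step. One way to inspect the remaining quota is Docker's documented rate-limit probe; a sketch, assuming jq is available ("ratelimitpreview/test" is Docker's designated probe image):
	
	  # fetch a pull token, then read the rate-limit headers from the registry
	  TOKEN=$(curl -s "https://auth.docker.io/token?service=registry.docker.io&scope=repository:ratelimitpreview/test:pull" | jq -r .token)
	  # add -u user:password to the token request to check a specific account's quota
	  curl -s --head -H "Authorization: Bearer $TOKEN" \
	    https://registry-1.docker.io/v2/ratelimitpreview/test/manifests/latest | grep -i ratelimit
	
	Waiting out the limit window, raising the account tier, or mirroring the base images off Docker Hub would all avoid this class of failure.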
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                       NAMESPACE
	bcfcab7ad779b       52546a367cc9e       3 seconds ago       Running             coredns                   0                   bedd2ec989836       coredns-66bc5c9577-gzcdr                  kube-system
	68a7639d0dd49       6e38f40d628db       3 seconds ago       Running             storage-provisioner       0                   21fdebb22a254       storage-provisioner                       kube-system
	aefcb6ad4091a       fc25172553d79       3 seconds ago       Running             kube-proxy                0                   f6b12fae8865d       kube-proxy-g29j8                          kube-system
	d1735c532b549       c3994bc696102       13 seconds ago      Running             kube-apiserver            0                   97e4b6282c162       kube-apiserver-skaffold-600759            kube-system
	80ad111ab1f84       7dd6aaa1717ab       13 seconds ago      Running             kube-scheduler            0                   21cd502124f3e       kube-scheduler-skaffold-600759            kube-system
	9c2bb3b038bb1       5f1f5298c888d       13 seconds ago      Running             etcd                      0                   0ada090809338       etcd-skaffold-600759                      kube-system
	0e5e0ebf97f9c       c80c8dbafe7dd       13 seconds ago      Running             kube-controller-manager   0                   06711b21443f3       kube-controller-manager-skaffold-600759   kube-system
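	
	This table is the CRI-level container view minikube captures from the node. The same listing can usually be reproduced directly — assuming crictl is present on the node image, as it is on recent minikube images:
	
	  # list all CRI containers on the node
	  minikube ssh -p skaffold-600759 "sudo crictl ps -a"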
	
	
	==> coredns [bcfcab7ad779] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	
	
	==> describe nodes <==
	Name:               skaffold-600759
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=skaffold-600759
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6d66ff63385795e7745a92b3d96cb54f5b977801
	                    minikube.k8s.io/name=skaffold-600759
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_13T14_08_58_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 13 Oct 2025 14:08:55 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  skaffold-600759
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 13 Oct 2025 14:08:57 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 13 Oct 2025 14:09:01 +0000   Mon, 13 Oct 2025 14:08:55 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 13 Oct 2025 14:09:01 +0000   Mon, 13 Oct 2025 14:08:55 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 13 Oct 2025 14:09:01 +0000   Mon, 13 Oct 2025 14:08:55 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 13 Oct 2025 14:09:01 +0000   Mon, 13 Oct 2025 14:09:01 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    skaffold-600759
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863448Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863448Ki
	  pods:               110
	System Info:
	  Machine ID:                 ae5563d544f246dfb2debce30ea7e52f
	  System UUID:                2721505f-91c0-410c-83f1-ad2dac5d9d90
	  Boot ID:                    11a94ccc-a4cf-476c-b883-d77264fdee8f
	  Kernel Version:             6.8.0-1041-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://28.5.0
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                       ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-66bc5c9577-gzcdr                   100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     4s
	  kube-system                 etcd-skaffold-600759                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         10s
	  kube-system                 kube-apiserver-skaffold-600759             250m (3%)     0 (0%)      0 (0%)           0 (0%)         10s
	  kube-system                 kube-controller-manager-skaffold-600759    200m (2%)     0 (0%)      0 (0%)           0 (0%)         10s
	  kube-system                 kube-proxy-g29j8                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         4s
	  kube-system                 kube-scheduler-skaffold-600759             100m (1%)     0 (0%)      0 (0%)           0 (0%)         10s
	  kube-system                 storage-provisioner                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         8s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   0 (0%)
	  memory             170Mi (0%)  170Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 2s                 kube-proxy       
	  Normal  Starting                 14s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  14s (x8 over 14s)  kubelet          Node skaffold-600759 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    14s (x8 over 14s)  kubelet          Node skaffold-600759 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     14s (x7 over 14s)  kubelet          Node skaffold-600759 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  14s                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 10s                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  10s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  10s                kubelet          Node skaffold-600759 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    10s                kubelet          Node skaffold-600759 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     10s                kubelet          Node skaffold-600759 status is now: NodeHasSufficientPID
	  Normal  NodeReady                6s                 kubelet          Node skaffold-600759 status is now: NodeReady
	  Normal  RegisteredNode           5s                 node-controller  Node skaffold-600759 event: Registered Node skaffold-600759 in Controller
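	
	The node description above (labels, capacity, allocated resources, events) is the standard describe view and can be regenerated at any time while the profile is up:
	
	  # re-render the node view, including recent events
	  kubectl --context skaffold-600759 describe node skaffold-600759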
	
	
	==> dmesg <==
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 86 39 35 f3 64 4d 08 06
	[  +0.000647] IPv4: martian source 10.244.0.31 from 10.244.0.7, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 4e 55 e3 76 26 92 08 06
	[  +9.892718] IPv4: martian source 10.244.0.32 from 10.244.0.25, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 86 2a 99 64 46 3d 08 06
	[Oct13 13:40] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 8a b5 fe ff b2 ae 08 06
	[Oct13 13:41] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff 26 db 2e 1c c1 c4 08 06
	[Oct13 13:42] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff da 65 0d 99 f7 7a 08 06
	[ +32.969545] IPv4: martian source 10.244.0.1 from 10.244.0.8, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff fa 4a b8 fd bd a0 08 06
	[Oct13 13:55] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff ca 58 c0 08 14 66 08 06
	[Oct13 13:58] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 62 77 d6 2c 21 9b 08 06
	[Oct13 14:05] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 86 3d ac a7 6f 32 08 06
	[Oct13 14:06] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 8a c4 7c 38 f9 7e 08 06
	[Oct13 14:07] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 1a 46 2f 41 9a a9 08 06
	[Oct13 14:09] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff f6 17 07 d9 db f9 08 06
	
	
	==> etcd [9c2bb3b038bb] <==
	{"level":"warn","ts":"2025-10-13T14:08:55.115742Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36654","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T14:08:55.123232Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36700","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T14:08:55.129181Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36704","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T14:08:55.135059Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36728","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T14:08:55.141139Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36744","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T14:08:55.147718Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36768","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T14:08:55.154739Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36788","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T14:08:55.160823Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36812","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T14:08:55.167743Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36840","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T14:08:55.174160Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36868","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T14:08:55.180875Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36876","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T14:08:55.188210Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36894","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T14:08:55.195392Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36912","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T14:08:55.217128Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36934","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T14:08:55.223262Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36966","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T14:08:55.230453Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36970","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T14:08:55.278964Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36994","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-10-13T14:09:03.779527Z","caller":"traceutil/trace.go:172","msg":"trace[2119658154] transaction","detail":"{read_only:false; response_revision:371; number_of_response:1; }","duration":"131.256101ms","start":"2025-10-13T14:09:03.648252Z","end":"2025-10-13T14:09:03.779508Z","steps":["trace[2119658154] 'process raft request'  (duration: 131.121046ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-13T14:09:03.779533Z","caller":"traceutil/trace.go:172","msg":"trace[1645608779] transaction","detail":"{read_only:false; response_revision:370; number_of_response:1; }","duration":"131.433097ms","start":"2025-10-13T14:09:03.648064Z","end":"2025-10-13T14:09:03.779497Z","steps":["trace[1645608779] 'process raft request'  (duration: 106.220281ms)","trace[1645608779] 'compare'  (duration: 24.940003ms)"],"step_count":2}
	{"level":"info","ts":"2025-10-13T14:09:03.942198Z","caller":"traceutil/trace.go:172","msg":"trace[2127528773] linearizableReadLoop","detail":"{readStateIndex:382; appliedIndex:382; }","duration":"152.84238ms","start":"2025-10-13T14:09:03.789321Z","end":"2025-10-13T14:09:03.942164Z","steps":["trace[2127528773] 'read index received'  (duration: 152.832535ms)","trace[2127528773] 'applied index is now lower than readState.Index'  (duration: 8.786µs)"],"step_count":2}
	{"level":"warn","ts":"2025-10-13T14:09:03.945877Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"156.52648ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/bootstrap-signer\" limit:1 ","response":"range_response_count:1 size:197"}
	{"level":"info","ts":"2025-10-13T14:09:03.945937Z","caller":"traceutil/trace.go:172","msg":"trace[1607692199] transaction","detail":"{read_only:false; response_revision:372; number_of_response:1; }","duration":"159.2917ms","start":"2025-10-13T14:09:03.786629Z","end":"2025-10-13T14:09:03.945921Z","steps":["trace[1607692199] 'process raft request'  (duration: 155.679363ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-13T14:09:03.945964Z","caller":"traceutil/trace.go:172","msg":"trace[1249664331] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/bootstrap-signer; range_end:; response_count:1; response_revision:371; }","duration":"156.640661ms","start":"2025-10-13T14:09:03.789312Z","end":"2025-10-13T14:09:03.945953Z","steps":["trace[1249664331] 'agreement among raft nodes before linearized reading'  (duration: 152.93734ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-13T14:09:03.946009Z","caller":"traceutil/trace.go:172","msg":"trace[76491729] transaction","detail":"{read_only:false; response_revision:374; number_of_response:1; }","duration":"158.444253ms","start":"2025-10-13T14:09:03.787549Z","end":"2025-10-13T14:09:03.945994Z","steps":["trace[76491729] 'process raft request'  (duration: 158.394947ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-13T14:09:03.946053Z","caller":"traceutil/trace.go:172","msg":"trace[1405593765] transaction","detail":"{read_only:false; response_revision:373; number_of_response:1; }","duration":"159.312938ms","start":"2025-10-13T14:09:03.786717Z","end":"2025-10-13T14:09:03.946030Z","steps":["trace[1405593765] 'process raft request'  (duration: 159.158754ms)"],"step_count":1}
	
	
	==> kernel <==
	 14:09:07 up  6:51,  0 user,  load average: 1.03, 1.25, 8.79
	Linux skaffold-600759 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kube-apiserver [d1735c532b54] <==
	I1013 14:08:55.777504       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1013 14:08:55.777539       1 default_servicecidr_controller.go:166] Creating default ServiceCIDR with CIDRs: [10.96.0.0/12]
	I1013 14:08:55.777649       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1013 14:08:55.782403       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1013 14:08:55.782821       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1013 14:08:55.789279       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1013 14:08:55.789419       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1013 14:08:55.944560       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1013 14:08:56.658426       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1013 14:08:56.662889       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1013 14:08:56.662908       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1013 14:08:57.110369       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1013 14:08:57.143543       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1013 14:08:57.270898       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1013 14:08:57.276909       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
	I1013 14:08:57.277807       1 controller.go:667] quota admission added evaluator for: endpoints
	I1013 14:08:57.281456       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1013 14:08:57.694382       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1013 14:08:58.012779       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1013 14:08:58.020664       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1013 14:08:58.027156       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1013 14:09:02.947710       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1013 14:09:02.951309       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1013 14:09:03.394127       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I1013 14:09:03.786172       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [0e5e0ebf97f9] <==
	I1013 14:09:02.692336       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1013 14:09:02.692315       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1013 14:09:02.692436       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1013 14:09:02.692372       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1013 14:09:02.692785       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1013 14:09:02.694229       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1013 14:09:02.694319       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1013 14:09:02.696690       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1013 14:09:02.696720       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1013 14:09:02.696819       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1013 14:09:02.696872       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1013 14:09:02.696883       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1013 14:09:02.696890       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1013 14:09:02.697012       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1013 14:09:02.697082       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1013 14:09:02.698963       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1013 14:09:02.699000       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1013 14:09:02.700156       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1013 14:09:02.700187       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1013 14:09:02.703478       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1013 14:09:02.706823       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="skaffold-600759" podCIDRs=["10.244.0.0/24"]
	I1013 14:09:02.709774       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1013 14:09:02.717414       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1013 14:09:02.720675       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1013 14:09:02.723979       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
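	
	The node-ipam controller's "Set node PodCIDR ... 10.244.0.0/24" line matches the PodCIDR reported in the node description above; it can be confirmed straight from the API:
	
	  # read the pod CIDR the controller assigned to the node
	  kubectl --context skaffold-600759 get node skaffold-600759 -o jsonpath='{.spec.podCIDR}'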
	
	
	==> kube-proxy [aefcb6ad4091] <==
	I1013 14:09:04.272885       1 server_linux.go:53] "Using iptables proxy"
	I1013 14:09:04.332484       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1013 14:09:04.432965       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1013 14:09:04.433006       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1013 14:09:04.433124       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1013 14:09:04.455616       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1013 14:09:04.455681       1 server_linux.go:132] "Using iptables Proxier"
	I1013 14:09:04.463227       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1013 14:09:04.463636       1 server.go:527] "Version info" version="v1.34.1"
	I1013 14:09:04.463664       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1013 14:09:04.465292       1 config.go:200] "Starting service config controller"
	I1013 14:09:04.465310       1 config.go:106] "Starting endpoint slice config controller"
	I1013 14:09:04.465328       1 config.go:403] "Starting serviceCIDR config controller"
	I1013 14:09:04.465341       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1013 14:09:04.465350       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1013 14:09:04.465338       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1013 14:09:04.465507       1 config.go:309] "Starting node config controller"
	I1013 14:09:04.465519       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1013 14:09:04.565586       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1013 14:09:04.565582       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1013 14:09:04.565614       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1013 14:09:04.565642       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
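	
	The kube-proxy warning above suggests `--nodeport-addresses primary` itself; in a kubeadm-style cluster that setting lives in the kube-proxy ConfigMap rather than on the command line. A sketch, assuming the default kubeadm layout and a kube-proxy new enough to accept the "primary" value (the warning text implies this one is):
	
	  # in the config.conf key, set: nodePortAddresses: ["primary"]
	  kubectl --context skaffold-600759 -n kube-system edit configmap kube-proxy
	  # restart the daemonset so the pods pick up the change
	  kubectl --context skaffold-600759 -n kube-system rollout restart daemonset kube-proxy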
	
	
	==> kube-scheduler [80ad111ab1f8] <==
	E1013 14:08:55.704486       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1013 14:08:55.704511       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1013 14:08:55.704600       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1013 14:08:55.704611       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1013 14:08:55.704721       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1013 14:08:55.704720       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1013 14:08:55.704624       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1013 14:08:55.704856       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1013 14:08:55.704858       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1013 14:08:55.704910       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1013 14:08:56.509217       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1013 14:08:56.519253       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1013 14:08:56.577496       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1013 14:08:56.599679       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1013 14:08:56.670061       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1013 14:08:56.681143       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1013 14:08:56.724687       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1013 14:08:56.735773       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1013 14:08:56.754862       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1013 14:08:56.755815       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1013 14:08:56.813566       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1013 14:08:56.853714       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1013 14:08:56.912034       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1013 14:08:56.955179       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	I1013 14:08:58.801880       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 13 14:08:58 skaffold-600759 kubelet[2252]: I1013 14:08:58.897282    2252 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-skaffold-600759"
	Oct 13 14:08:58 skaffold-600759 kubelet[2252]: I1013 14:08:58.897802    2252 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/etcd-skaffold-600759"
	Oct 13 14:08:58 skaffold-600759 kubelet[2252]: E1013 14:08:58.907525    2252 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-skaffold-600759\" already exists" pod="kube-system/kube-apiserver-skaffold-600759"
	Oct 13 14:08:58 skaffold-600759 kubelet[2252]: E1013 14:08:58.909154    2252 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-skaffold-600759\" already exists" pod="kube-system/kube-scheduler-skaffold-600759"
	Oct 13 14:08:58 skaffold-600759 kubelet[2252]: E1013 14:08:58.909401    2252 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-skaffold-600759\" already exists" pod="kube-system/kube-controller-manager-skaffold-600759"
	Oct 13 14:08:58 skaffold-600759 kubelet[2252]: E1013 14:08:58.909427    2252 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"etcd-skaffold-600759\" already exists" pod="kube-system/etcd-skaffold-600759"
	Oct 13 14:08:58 skaffold-600759 kubelet[2252]: I1013 14:08:58.920214    2252 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-skaffold-600759" podStartSLOduration=1.920196177 podStartE2EDuration="1.920196177s" podCreationTimestamp="2025-10-13 14:08:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 14:08:58.920041192 +0000 UTC m=+1.146198690" watchObservedRunningTime="2025-10-13 14:08:58.920196177 +0000 UTC m=+1.146353659"
	Oct 13 14:08:58 skaffold-600759 kubelet[2252]: I1013 14:08:58.936730    2252 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-skaffold-600759" podStartSLOduration=1.9367093199999998 podStartE2EDuration="1.93670932s" podCreationTimestamp="2025-10-13 14:08:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 14:08:58.92799753 +0000 UTC m=+1.154155027" watchObservedRunningTime="2025-10-13 14:08:58.93670932 +0000 UTC m=+1.162866821"
	Oct 13 14:08:58 skaffold-600759 kubelet[2252]: I1013 14:08:58.948620    2252 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-skaffold-600759" podStartSLOduration=1.948601117 podStartE2EDuration="1.948601117s" podCreationTimestamp="2025-10-13 14:08:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 14:08:58.937454107 +0000 UTC m=+1.163611607" watchObservedRunningTime="2025-10-13 14:08:58.948601117 +0000 UTC m=+1.174758607"
	Oct 13 14:08:58 skaffold-600759 kubelet[2252]: I1013 14:08:58.948718    2252 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-skaffold-600759" podStartSLOduration=1.948714057 podStartE2EDuration="1.948714057s" podCreationTimestamp="2025-10-13 14:08:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 14:08:58.948389242 +0000 UTC m=+1.174546741" watchObservedRunningTime="2025-10-13 14:08:58.948714057 +0000 UTC m=+1.174871561"
	Oct 13 14:09:01 skaffold-600759 kubelet[2252]: I1013 14:09:01.974015    2252 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Oct 13 14:09:02 skaffold-600759 kubelet[2252]: I1013 14:09:02.774994    2252 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hwwqf\" (UniqueName: \"kubernetes.io/projected/a453fdce-cdf1-4d1d-a723-e4452a80c902-kube-api-access-hwwqf\") pod \"storage-provisioner\" (UID: \"a453fdce-cdf1-4d1d-a723-e4452a80c902\") " pod="kube-system/storage-provisioner"
	Oct 13 14:09:02 skaffold-600759 kubelet[2252]: I1013 14:09:02.775038    2252 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/a453fdce-cdf1-4d1d-a723-e4452a80c902-tmp\") pod \"storage-provisioner\" (UID: \"a453fdce-cdf1-4d1d-a723-e4452a80c902\") " pod="kube-system/storage-provisioner"
	Oct 13 14:09:02 skaffold-600759 kubelet[2252]: E1013 14:09:02.881616    2252 projected.go:291] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Oct 13 14:09:02 skaffold-600759 kubelet[2252]: E1013 14:09:02.881652    2252 projected.go:196] Error preparing data for projected volume kube-api-access-hwwqf for pod kube-system/storage-provisioner: configmap "kube-root-ca.crt" not found
	Oct 13 14:09:02 skaffold-600759 kubelet[2252]: E1013 14:09:02.881751    2252 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/a453fdce-cdf1-4d1d-a723-e4452a80c902-kube-api-access-hwwqf podName:a453fdce-cdf1-4d1d-a723-e4452a80c902 nodeName:}" failed. No retries permitted until 2025-10-13 14:09:03.381720751 +0000 UTC m=+5.607878243 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-hwwqf" (UniqueName: "kubernetes.io/projected/a453fdce-cdf1-4d1d-a723-e4452a80c902-kube-api-access-hwwqf") pod "storage-provisioner" (UID: "a453fdce-cdf1-4d1d-a723-e4452a80c902") : configmap "kube-root-ca.crt" not found
	Oct 13 14:09:03 skaffold-600759 kubelet[2252]: I1013 14:09:03.479973    2252 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/b05dce05-44fa-4f30-99c0-2c28f61c280f-kube-proxy\") pod \"kube-proxy-g29j8\" (UID: \"b05dce05-44fa-4f30-99c0-2c28f61c280f\") " pod="kube-system/kube-proxy-g29j8"
	Oct 13 14:09:03 skaffold-600759 kubelet[2252]: I1013 14:09:03.480017    2252 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b05dce05-44fa-4f30-99c0-2c28f61c280f-xtables-lock\") pod \"kube-proxy-g29j8\" (UID: \"b05dce05-44fa-4f30-99c0-2c28f61c280f\") " pod="kube-system/kube-proxy-g29j8"
	Oct 13 14:09:03 skaffold-600759 kubelet[2252]: I1013 14:09:03.480041    2252 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gbvvt\" (UniqueName: \"kubernetes.io/projected/b05dce05-44fa-4f30-99c0-2c28f61c280f-kube-api-access-gbvvt\") pod \"kube-proxy-g29j8\" (UID: \"b05dce05-44fa-4f30-99c0-2c28f61c280f\") " pod="kube-system/kube-proxy-g29j8"
	Oct 13 14:09:03 skaffold-600759 kubelet[2252]: I1013 14:09:03.480070    2252 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b05dce05-44fa-4f30-99c0-2c28f61c280f-lib-modules\") pod \"kube-proxy-g29j8\" (UID: \"b05dce05-44fa-4f30-99c0-2c28f61c280f\") " pod="kube-system/kube-proxy-g29j8"
	Oct 13 14:09:04 skaffold-600759 kubelet[2252]: I1013 14:09:04.083650    2252 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lmzpl\" (UniqueName: \"kubernetes.io/projected/aa5cf1e1-9802-417f-8d07-d304450b9e93-kube-api-access-lmzpl\") pod \"coredns-66bc5c9577-gzcdr\" (UID: \"aa5cf1e1-9802-417f-8d07-d304450b9e93\") " pod="kube-system/coredns-66bc5c9577-gzcdr"
	Oct 13 14:09:04 skaffold-600759 kubelet[2252]: I1013 14:09:04.083745    2252 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/aa5cf1e1-9802-417f-8d07-d304450b9e93-config-volume\") pod \"coredns-66bc5c9577-gzcdr\" (UID: \"aa5cf1e1-9802-417f-8d07-d304450b9e93\") " pod="kube-system/coredns-66bc5c9577-gzcdr"
	Oct 13 14:09:04 skaffold-600759 kubelet[2252]: I1013 14:09:04.954198    2252 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=5.954175776 podStartE2EDuration="5.954175776s" podCreationTimestamp="2025-10-13 14:08:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 14:09:04.94304504 +0000 UTC m=+7.169202541" watchObservedRunningTime="2025-10-13 14:09:04.954175776 +0000 UTC m=+7.180333276"
	Oct 13 14:09:04 skaffold-600759 kubelet[2252]: I1013 14:09:04.967364    2252 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-g29j8" podStartSLOduration=1.967341644 podStartE2EDuration="1.967341644s" podCreationTimestamp="2025-10-13 14:09:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 14:09:04.954363616 +0000 UTC m=+7.180521111" watchObservedRunningTime="2025-10-13 14:09:04.967341644 +0000 UTC m=+7.193499145"
	Oct 13 14:09:04 skaffold-600759 kubelet[2252]: I1013 14:09:04.976931    2252 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-gzcdr" podStartSLOduration=1.9769098550000002 podStartE2EDuration="1.976909855s" podCreationTimestamp="2025-10-13 14:09:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 14:09:04.967692532 +0000 UTC m=+7.193850027" watchObservedRunningTime="2025-10-13 14:09:04.976909855 +0000 UTC m=+7.203067354"
	
	
	==> storage-provisioner [68a7639d0dd4] <==
	I1013 14:09:04.226534       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p skaffold-600759 -n skaffold-600759
helpers_test.go:269: (dbg) Run:  kubectl --context skaffold-600759 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestSkaffold FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
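
Note on the scheduler errors near the top of the post-mortem: the "Failed to watch ... is forbidden" lines at 14:08:56 are typical of the first seconds after control-plane bootstrap, before the RBAC-backed informer caches have synced; the 14:08:58 "Caches are synced" line marks the recovery. One way to spot-check such a permission programmatically is a SubjectAccessReview. Below is a minimal client-go sketch, using the user and resource names taken from the log; the kubeconfig path is an assumption, and this is illustrative rather than part of the test suite:

	package main

	import (
		"context"
		"fmt"

		authorizationv1 "k8s.io/api/authorization/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// The kubeconfig path is an assumption; point it at the profile under test.
		cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
		if err != nil {
			panic(err)
		}
		client, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		// Ask the API server: may system:kube-scheduler list volumeattachments?
		sar := &authorizationv1.SubjectAccessReview{
			Spec: authorizationv1.SubjectAccessReviewSpec{
				User: "system:kube-scheduler",
				ResourceAttributes: &authorizationv1.ResourceAttributes{
					Verb:     "list",
					Group:    "storage.k8s.io",
					Resource: "volumeattachments",
				},
			},
		}
		resp, err := client.AuthorizationV1().SubjectAccessReviews().Create(context.TODO(), sar, metav1.CreateOptions{})
		if err != nil {
			panic(err)
		}
		fmt.Println("allowed:", resp.Status.Allowed)
	}
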
helpers_test.go:175: Cleaning up "skaffold-600759" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p skaffold-600759
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p skaffold-600759: (2.179644217s)
--- FAIL: TestSkaffold (37.44s)
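
The kubelet's mount failure in the post-mortem ("configmap \"kube-root-ca.crt\" not found" for kube-api-access-hwwqf) likewise self-heals once the root CA configmap is published; the "durationBeforeRetry 500ms" message reflects an exponentially growing retry delay. A sketch of that doubling-with-cap schedule follows: the 500ms start matches the log, while the two-minute cap and the function name are assumptions (the real logic lives in nestedpendingoperations.go):

	package main

	import (
		"fmt"
		"time"
	)

	// nextBackoff doubles the previous retry delay, starting at 500ms and
	// capping at two minutes (the cap is an illustrative assumption).
	func nextBackoff(prev time.Duration) time.Duration {
		const (
			initial = 500 * time.Millisecond
			ceiling = 2 * time.Minute
		)
		if prev <= 0 {
			return initial
		}
		if next := prev * 2; next < ceiling {
			return next
		}
		return ceiling
	}

	func main() {
		var d time.Duration
		for i := 0; i < 5; i++ {
			d = nextBackoff(d)
			fmt.Println(d) // 500ms 1s 2s 4s 8s
		}
	}
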

                                                
                                    

Test pass (324/347)

Order passed test Duration
3 TestDownloadOnly/v1.28.0/json-events 31.73
4 TestDownloadOnly/v1.28.0/preload-exists 0
8 TestDownloadOnly/v1.28.0/LogsDuration 0.06
9 TestDownloadOnly/v1.28.0/DeleteAll 0.22
10 TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds 0.14
12 TestDownloadOnly/v1.34.1/json-events 9.98
13 TestDownloadOnly/v1.34.1/preload-exists 0
17 TestDownloadOnly/v1.34.1/LogsDuration 0.06
18 TestDownloadOnly/v1.34.1/DeleteAll 0.21
19 TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds 0.14
20 TestDownloadOnlyKic 0.38
21 TestBinaryMirror 0.88
22 TestOffline 77.43
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.06
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.06
27 TestAddons/Setup 128.38
29 TestAddons/serial/Volcano 41.15
31 TestAddons/serial/GCPAuth/Namespaces 0.12
32 TestAddons/serial/GCPAuth/FakeCredentials 9.48
35 TestAddons/parallel/Registry 15.05
36 TestAddons/parallel/RegistryCreds 0.67
37 TestAddons/parallel/Ingress 21.41
38 TestAddons/parallel/InspektorGadget 5.22
39 TestAddons/parallel/MetricsServer 5.61
41 TestAddons/parallel/CSI 51.35
42 TestAddons/parallel/Headlamp 17.34
43 TestAddons/parallel/CloudSpanner 6.45
44 TestAddons/parallel/LocalPath 54.6
45 TestAddons/parallel/NvidiaDevicePlugin 6.43
46 TestAddons/parallel/Yakd 10.66
47 TestAddons/parallel/AmdGpuDevicePlugin 6.47
48 TestAddons/StoppedEnableDisable 11.21
49 TestCertOptions 27.95
50 TestCertExpiration 248.37
51 TestDockerFlags 35.3
52 TestForceSystemdFlag 42.11
53 TestForceSystemdEnv 31.11
55 TestKVMDriverInstallOrUpdate 0.9
59 TestErrorSpam/setup 21.57
60 TestErrorSpam/start 0.64
61 TestErrorSpam/status 0.94
62 TestErrorSpam/pause 1.25
63 TestErrorSpam/unpause 1.35
64 TestErrorSpam/stop 10.93
67 TestFunctional/serial/CopySyncFile 0
68 TestFunctional/serial/StartWithProxy 59.87
69 TestFunctional/serial/AuditLog 0
70 TestFunctional/serial/SoftStart 50.72
71 TestFunctional/serial/KubeContext 0.05
72 TestFunctional/serial/KubectlGetPods 0.06
75 TestFunctional/serial/CacheCmd/cache/add_remote 2.39
76 TestFunctional/serial/CacheCmd/cache/add_local 1.5
77 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.05
78 TestFunctional/serial/CacheCmd/cache/list 0.05
79 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.28
80 TestFunctional/serial/CacheCmd/cache/cache_reload 1.31
81 TestFunctional/serial/CacheCmd/cache/delete 0.1
82 TestFunctional/serial/MinikubeKubectlCmd 0.11
83 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.1
84 TestFunctional/serial/ExtraConfig 51.55
85 TestFunctional/serial/ComponentHealth 0.06
86 TestFunctional/serial/LogsCmd 1
87 TestFunctional/serial/LogsFileCmd 1.02
88 TestFunctional/serial/InvalidService 3.98
90 TestFunctional/parallel/ConfigCmd 0.38
91 TestFunctional/parallel/DashboardCmd 11.5
92 TestFunctional/parallel/DryRun 0.41
93 TestFunctional/parallel/InternationalLanguage 0.18
94 TestFunctional/parallel/StatusCmd 1.06
98 TestFunctional/parallel/ServiceCmdConnect 13.77
99 TestFunctional/parallel/AddonsCmd 0.15
100 TestFunctional/parallel/PersistentVolumeClaim 41.15
102 TestFunctional/parallel/SSHCmd 0.62
103 TestFunctional/parallel/CpCmd 1.87
104 TestFunctional/parallel/MySQL 22.69
105 TestFunctional/parallel/FileSync 0.27
106 TestFunctional/parallel/CertSync 1.62
110 TestFunctional/parallel/NodeLabels 0.07
112 TestFunctional/parallel/NonActiveRuntimeDisabled 0.27
114 TestFunctional/parallel/License 0.37
115 TestFunctional/parallel/ServiceCmd/DeployApp 8.2
116 TestFunctional/parallel/ProfileCmd/profile_not_create 0.51
117 TestFunctional/parallel/MountCmd/any-port 7.71
118 TestFunctional/parallel/ProfileCmd/profile_list 0.44
119 TestFunctional/parallel/ProfileCmd/profile_json_output 0.44
120 TestFunctional/parallel/MountCmd/specific-port 2.13
121 TestFunctional/parallel/ServiceCmd/List 0.57
122 TestFunctional/parallel/ServiceCmd/JSONOutput 0.64
123 TestFunctional/parallel/ServiceCmd/HTTPS 0.42
124 TestFunctional/parallel/ServiceCmd/Format 0.43
125 TestFunctional/parallel/MountCmd/VerifyCleanup 2.24
126 TestFunctional/parallel/ServiceCmd/URL 0.49
128 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.56
129 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
131 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 14.24
132 TestFunctional/parallel/Version/short 0.07
133 TestFunctional/parallel/Version/components 0.6
134 TestFunctional/parallel/ImageCommands/ImageListShort 0.27
135 TestFunctional/parallel/ImageCommands/ImageListTable 0.27
136 TestFunctional/parallel/ImageCommands/ImageListJson 0.28
137 TestFunctional/parallel/ImageCommands/ImageListYaml 0.26
138 TestFunctional/parallel/ImageCommands/ImageBuild 4.6
139 TestFunctional/parallel/ImageCommands/Setup 1.98
140 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.06
141 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.92
142 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.74
143 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.32
144 TestFunctional/parallel/ImageCommands/ImageRemove 0.49
145 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.58
146 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.36
147 TestFunctional/parallel/DockerEnv/bash 0.97
148 TestFunctional/parallel/UpdateContextCmd/no_changes 0.18
149 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.16
150 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.16
151 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.08
152 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
156 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
157 TestFunctional/delete_echo-server_images 0.04
158 TestFunctional/delete_my-image_image 0.02
159 TestFunctional/delete_minikube_cached_images 0.02
164 TestMultiControlPlane/serial/StartCluster 159.85
165 TestMultiControlPlane/serial/DeployApp 5.59
166 TestMultiControlPlane/serial/PingHostFromPods 1.16
167 TestMultiControlPlane/serial/AddWorkerNode 35.65
168 TestMultiControlPlane/serial/NodeLabels 0.07
169 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.94
170 TestMultiControlPlane/serial/CopyFile 17.58
171 TestMultiControlPlane/serial/StopSecondaryNode 11.51
172 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.74
173 TestMultiControlPlane/serial/RestartSecondaryNode 38.31
174 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.95
175 TestMultiControlPlane/serial/RestartClusterKeepsNodes 177.65
176 TestMultiControlPlane/serial/DeleteSecondaryNode 9.56
177 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.7
178 TestMultiControlPlane/serial/StopCluster 32.32
179 TestMultiControlPlane/serial/RestartCluster 106.1
180 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.73
181 TestMultiControlPlane/serial/AddSecondaryNode 41.53
182 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.93
185 TestImageBuild/serial/Setup 26.22
186 TestImageBuild/serial/NormalBuild 1.08
187 TestImageBuild/serial/BuildWithBuildArg 0.66
188 TestImageBuild/serial/BuildWithDockerIgnore 0.46
189 TestImageBuild/serial/BuildWithSpecifiedDockerfile 0.48
193 TestJSONOutput/start/Command 62.35
194 TestJSONOutput/start/Audit 0
196 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
197 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
199 TestJSONOutput/pause/Command 0.5
200 TestJSONOutput/pause/Audit 0
202 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
203 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
205 TestJSONOutput/unpause/Command 0.47
206 TestJSONOutput/unpause/Audit 0
208 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
209 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
211 TestJSONOutput/stop/Command 5.75
212 TestJSONOutput/stop/Audit 0
214 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
215 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
216 TestErrorJSONOutput 0.22
218 TestKicCustomNetwork/create_custom_network 24.43
219 TestKicCustomNetwork/use_default_bridge_network 24.05
220 TestKicExistingNetwork 24.91
221 TestKicCustomSubnet 24.19
222 TestKicStaticIP 26.98
223 TestMainNoArgs 0.05
224 TestMinikubeProfile 52.28
227 TestMountStart/serial/StartWithMountFirst 9.03
228 TestMountStart/serial/VerifyMountFirst 0.29
229 TestMountStart/serial/StartWithMountSecond 11.52
230 TestMountStart/serial/VerifyMountSecond 0.29
231 TestMountStart/serial/DeleteFirst 1.55
232 TestMountStart/serial/VerifyMountPostDelete 0.28
233 TestMountStart/serial/Stop 1.21
234 TestMountStart/serial/RestartStopped 9.33
235 TestMountStart/serial/VerifyMountPostStop 0.29
238 TestMultiNode/serial/FreshStart2Nodes 91.05
239 TestMultiNode/serial/DeployApp2Nodes 4.16
240 TestMultiNode/serial/PingHostFrom2Pods 0.82
241 TestMultiNode/serial/AddNode 32
242 TestMultiNode/serial/MultiNodeLabels 0.07
243 TestMultiNode/serial/ProfileList 0.71
244 TestMultiNode/serial/CopyFile 10.03
245 TestMultiNode/serial/StopNode 2.24
246 TestMultiNode/serial/StartAfterStop 8.75
247 TestMultiNode/serial/RestartKeepsNodes 79.8
248 TestMultiNode/serial/DeleteNode 5.3
249 TestMultiNode/serial/StopMultiNode 21.7
250 TestMultiNode/serial/RestartMultiNode 51.5
251 TestMultiNode/serial/ValidateNameConflict 25.85
256 TestPreload 110.28
258 TestScheduledStopUnix 95.6
261 TestInsufficientStorage 10.65
262 TestRunningBinaryUpgrade 54.43
264 TestKubernetesUpgrade 338.14
265 TestMissingContainerUpgrade 102.84
284 TestStoppedBinaryUpgrade/Setup 4.4
285 TestStoppedBinaryUpgrade/Upgrade 51.58
286 TestStoppedBinaryUpgrade/MinikubeLogs 0.85
288 TestNoKubernetes/serial/StartNoK8sWithVersion 0.07
289 TestNoKubernetes/serial/StartWithK8s 24.4
291 TestPause/serial/Start 64.51
292 TestNoKubernetes/serial/StartWithStopK8s 18.86
293 TestNoKubernetes/serial/Start 7.84
294 TestNoKubernetes/serial/VerifyK8sNotRunning 0.27
295 TestNoKubernetes/serial/ProfileList 31.64
296 TestNoKubernetes/serial/Stop 1.21
297 TestNoKubernetes/serial/StartNoArgs 8.69
298 TestPause/serial/SecondStartNoReconfiguration 46.29
299 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.28
300 TestNetworkPlugins/group/auto/Start 65.3
301 TestPause/serial/Pause 0.66
302 TestPause/serial/VerifyStatus 0.38
303 TestPause/serial/Unpause 0.5
304 TestPause/serial/PauseAgain 0.69
305 TestPause/serial/DeletePaused 2.22
306 TestPause/serial/VerifyDeletedResources 13.12
307 TestNetworkPlugins/group/flannel/Start 41.15
308 TestNetworkPlugins/group/enable-default-cni/Start 71.63
309 TestNetworkPlugins/group/flannel/ControllerPod 6.01
310 TestNetworkPlugins/group/auto/KubeletFlags 0.31
311 TestNetworkPlugins/group/auto/NetCatPod 10.2
312 TestNetworkPlugins/group/flannel/KubeletFlags 0.28
313 TestNetworkPlugins/group/flannel/NetCatPod 8.18
314 TestNetworkPlugins/group/auto/DNS 0.13
315 TestNetworkPlugins/group/auto/Localhost 0.12
316 TestNetworkPlugins/group/auto/HairPin 0.12
317 TestNetworkPlugins/group/flannel/DNS 0.14
318 TestNetworkPlugins/group/flannel/Localhost 0.12
319 TestNetworkPlugins/group/flannel/HairPin 0.11
320 TestNetworkPlugins/group/bridge/Start 66.87
321 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.44
322 TestNetworkPlugins/group/enable-default-cni/NetCatPod 11.51
323 TestNetworkPlugins/group/kubenet/Start 43.78
324 TestNetworkPlugins/group/enable-default-cni/DNS 0.16
325 TestNetworkPlugins/group/enable-default-cni/Localhost 0.13
326 TestNetworkPlugins/group/enable-default-cni/HairPin 0.14
327 TestNetworkPlugins/group/calico/Start 48.66
328 TestNetworkPlugins/group/kubenet/KubeletFlags 0.32
329 TestNetworkPlugins/group/kubenet/NetCatPod 9.2
330 TestNetworkPlugins/group/kindnet/Start 55.89
331 TestNetworkPlugins/group/kubenet/DNS 0.17
332 TestNetworkPlugins/group/kubenet/Localhost 0.14
333 TestNetworkPlugins/group/kubenet/HairPin 0.14
334 TestNetworkPlugins/group/bridge/KubeletFlags 0.32
335 TestNetworkPlugins/group/bridge/NetCatPod 11.23
336 TestNetworkPlugins/group/bridge/DNS 0.18
337 TestNetworkPlugins/group/bridge/Localhost 0.19
338 TestNetworkPlugins/group/bridge/HairPin 0.23
339 TestNetworkPlugins/group/custom-flannel/Start 45.69
340 TestNetworkPlugins/group/calico/ControllerPod 6.01
341 TestNetworkPlugins/group/calico/KubeletFlags 0.3
342 TestNetworkPlugins/group/calico/NetCatPod 10.22
343 TestNetworkPlugins/group/false/Start 67.12
344 TestNetworkPlugins/group/calico/DNS 0.16
345 TestNetworkPlugins/group/calico/Localhost 0.13
346 TestNetworkPlugins/group/calico/HairPin 0.14
347 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
348 TestNetworkPlugins/group/kindnet/KubeletFlags 0.33
349 TestNetworkPlugins/group/kindnet/NetCatPod 9.23
350 TestNetworkPlugins/group/kindnet/DNS 0.19
351 TestNetworkPlugins/group/kindnet/Localhost 0.16
352 TestNetworkPlugins/group/kindnet/HairPin 0.12
354 TestStartStop/group/old-k8s-version/serial/FirstStart 43.29
355 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.39
356 TestNetworkPlugins/group/custom-flannel/NetCatPod 10.96
357 TestNetworkPlugins/group/custom-flannel/DNS 0.15
358 TestNetworkPlugins/group/custom-flannel/Localhost 0.14
359 TestNetworkPlugins/group/custom-flannel/HairPin 0.14
361 TestStartStop/group/no-preload/serial/FirstStart 73.95
363 TestStartStop/group/embed-certs/serial/FirstStart 65.89
364 TestNetworkPlugins/group/false/KubeletFlags 0.4
365 TestStartStop/group/old-k8s-version/serial/DeployApp 10.32
366 TestNetworkPlugins/group/false/NetCatPod 9.22
367 TestNetworkPlugins/group/false/DNS 0.14
368 TestNetworkPlugins/group/false/Localhost 0.12
369 TestNetworkPlugins/group/false/HairPin 0.12
370 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 0.87
371 TestStartStop/group/old-k8s-version/serial/Stop 10.86
372 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.2
373 TestStartStop/group/old-k8s-version/serial/SecondStart 48
375 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 40.88
376 TestStartStop/group/no-preload/serial/DeployApp 11.25
377 TestStartStop/group/embed-certs/serial/DeployApp 8.25
378 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.82
379 TestStartStop/group/no-preload/serial/Stop 10.82
380 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.78
381 TestStartStop/group/embed-certs/serial/Stop 10.88
382 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6
383 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 8.27
384 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.2
385 TestStartStop/group/no-preload/serial/SecondStart 46.94
386 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.09
387 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.82
388 TestStartStop/group/default-k8s-diff-port/serial/Stop 12.01
389 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.2
390 TestStartStop/group/embed-certs/serial/SecondStart 54.97
391 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.28
392 TestStartStop/group/old-k8s-version/serial/Pause 2.9
394 TestStartStop/group/newest-cni/serial/FirstStart 31.19
395 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.25
396 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 49.53
397 TestStartStop/group/newest-cni/serial/DeployApp 0
398 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.72
399 TestStartStop/group/newest-cni/serial/Stop 10.83
400 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6.01
401 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.08
402 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.18
403 TestStartStop/group/newest-cni/serial/SecondStart 13.51
404 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.24
405 TestStartStop/group/no-preload/serial/Pause 2.49
406 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6.01
407 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6.01
408 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.13
409 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
410 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
411 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.26
412 TestStartStop/group/newest-cni/serial/Pause 2.44
413 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.26
414 TestStartStop/group/embed-certs/serial/Pause 2.47
415 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.07
416 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.22
417 TestStartStop/group/default-k8s-diff-port/serial/Pause 2.35
TestDownloadOnly/v1.28.0/json-events (31.73s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-609983 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=docker --driver=docker  --container-runtime=docker
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-609983 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=docker --driver=docker  --container-runtime=docker: (31.727928297s)
--- PASS: TestDownloadOnly/v1.28.0/json-events (31.73s)
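
With -o=json, minikube start streams one JSON event per line to stdout, which is what the json-events test consumes. A minimal sketch of reading such a stream with the standard library; beyond the events being JSON objects, the field name printed here ("type", per the CloudEvents shape minikube uses) is an assumption:

	package main

	import (
		"encoding/json"
		"fmt"
		"os"
	)

	func main() {
		// Pipe `minikube start -o=json ...` into this program.
		dec := json.NewDecoder(os.Stdin)
		for {
			var ev map[string]interface{}
			if err := dec.Decode(&ev); err != nil {
				break // io.EOF once the stream ends
			}
			// "type" follows the CloudEvents shape; its presence is an assumption here.
			fmt.Println(ev["type"])
		}
	}
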

                                                
                                    
TestDownloadOnly/v1.28.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/preload-exists
I1013 13:34:09.672045  849401 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime docker
I1013 13:34:09.672171  849401 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21724-845765/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-docker-overlay2-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.28.0/preload-exists (0.00s)
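
The preload-exists check above amounts to a stat of the cached tarball under $MINIKUBE_HOME/cache/preloaded-tarball. A sketch of that check; the filename pattern is copied from the log, and the helper name is hypothetical (minikube's real check lives in its preload package):

	package main

	import (
		"fmt"
		"os"
		"path/filepath"
	)

	// preloadExists is a hypothetical helper mirroring the check logged by
	// preload.go above: is the preloaded-images tarball already in the local cache?
	func preloadExists(minikubeHome, k8sVersion string) bool {
		name := fmt.Sprintf("preloaded-images-k8s-v18-%s-docker-overlay2-amd64.tar.lz4", k8sVersion)
		_, err := os.Stat(filepath.Join(minikubeHome, "cache", "preloaded-tarball", name))
		return err == nil
	}

	func main() {
		fmt.Println(preloadExists(os.Getenv("MINIKUBE_HOME"), "v1.28.0"))
	}
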

                                                
                                    
TestDownloadOnly/v1.28.0/LogsDuration (0.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-609983
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-609983: exit status 85 (60.427707ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬──────────┐
	│ COMMAND │                                                                                     ARGS                                                                                      │       PROFILE        │  USER   │ VERSION │     START TIME      │ END TIME │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼──────────┤
	│ start   │ -o=json --download-only -p download-only-609983 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=docker --driver=docker  --container-runtime=docker │ download-only-609983 │ jenkins │ v1.37.0 │ 13 Oct 25 13:33 UTC │          │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴──────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/13 13:33:37
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1013 13:33:37.987012  849413 out.go:360] Setting OutFile to fd 1 ...
	I1013 13:33:37.987290  849413 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1013 13:33:37.987310  849413 out.go:374] Setting ErrFile to fd 2...
	I1013 13:33:37.987320  849413 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1013 13:33:37.987633  849413 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21724-845765/.minikube/bin
	W1013 13:33:37.987858  849413 root.go:314] Error reading config file at /home/jenkins/minikube-integration/21724-845765/.minikube/config/config.json: open /home/jenkins/minikube-integration/21724-845765/.minikube/config/config.json: no such file or directory
	I1013 13:33:37.988427  849413 out.go:368] Setting JSON to true
	I1013 13:33:37.989533  849413 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":22551,"bootTime":1760339867,"procs":192,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1013 13:33:37.989644  849413 start.go:141] virtualization: kvm guest
	I1013 13:33:37.991322  849413 out.go:99] [download-only-609983] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1013 13:33:37.991500  849413 notify.go:220] Checking for updates...
	W1013 13:33:37.991502  849413 preload.go:349] Failed to list preload files: open /home/jenkins/minikube-integration/21724-845765/.minikube/cache/preloaded-tarball: no such file or directory
	I1013 13:33:37.992671  849413 out.go:171] MINIKUBE_LOCATION=21724
	I1013 13:33:37.993904  849413 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1013 13:33:37.995003  849413 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21724-845765/kubeconfig
	I1013 13:33:37.996150  849413 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21724-845765/.minikube
	I1013 13:33:37.997307  849413 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W1013 13:33:37.999326  849413 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1013 13:33:37.999636  849413 driver.go:421] Setting default libvirt URI to qemu:///system
	I1013 13:33:38.025877  849413 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1013 13:33:38.025994  849413 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1013 13:33:38.081731  849413 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:28 OomKillDisable:false NGoroutines:46 SystemTime:2025-10-13 13:33:38.072383909 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1013 13:33:38.081893  849413 docker.go:318] overlay module found
	I1013 13:33:38.083495  849413 out.go:99] Using the docker driver based on user configuration
	I1013 13:33:38.083523  849413 start.go:305] selected driver: docker
	I1013 13:33:38.083528  849413 start.go:925] validating driver "docker" against <nil>
	I1013 13:33:38.083626  849413 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1013 13:33:38.139155  849413 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:28 OomKillDisable:false NGoroutines:46 SystemTime:2025-10-13 13:33:38.129612693 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1013 13:33:38.139312  849413 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1013 13:33:38.139815  849413 start_flags.go:410] Using suggested 8000MB memory alloc based on sys=32093MB, container=32093MB
	I1013 13:33:38.139971  849413 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1013 13:33:38.141802  849413 out.go:171] Using Docker driver with root privileges
	I1013 13:33:38.143006  849413 cni.go:84] Creating CNI manager for ""
	I1013 13:33:38.143174  849413 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1013 13:33:38.143198  849413 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1013 13:33:38.143284  849413 start.go:349] cluster config:
	{Name:download-only-609983 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:8000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:download-only-609983 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1013 13:33:38.144545  849413 out.go:99] Starting "download-only-609983" primary control-plane node in "download-only-609983" cluster
	I1013 13:33:38.144586  849413 cache.go:123] Beginning downloading kic base image for docker with docker
	I1013 13:33:38.145811  849413 out.go:99] Pulling base image v0.0.48-1759745255-21703 ...
	I1013 13:33:38.145839  849413 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime docker
	I1013 13:33:38.145978  849413 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon
	I1013 13:33:38.162396  849413 cache.go:152] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 to local cache
	I1013 13:33:38.162609  849413 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local cache directory
	I1013 13:33:38.162698  849413 image.go:150] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 to local cache
	I1013 13:33:38.252130  849413 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-docker-overlay2-amd64.tar.lz4
	I1013 13:33:38.252180  849413 cache.go:58] Caching tarball of preloaded images
	I1013 13:33:38.252916  849413 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime docker
	I1013 13:33:38.254654  849413 out.go:99] Downloading Kubernetes v1.28.0 preload ...
	I1013 13:33:38.254679  849413 preload.go:313] getting checksum for preloaded-images-k8s-v18-v1.28.0-docker-overlay2-amd64.tar.lz4 from gcs api...
	I1013 13:33:38.366941  849413 preload.go:290] Got checksum from GCS API "8a955be835827bc584bcce0658a7fcc9"
	I1013 13:33:38.367066  849413 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-docker-overlay2-amd64.tar.lz4?checksum=md5:8a955be835827bc584bcce0658a7fcc9 -> /home/jenkins/minikube-integration/21724-845765/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-docker-overlay2-amd64.tar.lz4
	I1013 13:33:50.249551  849413 cache.go:61] Finished verifying existence of preloaded tar for v1.28.0 on docker
	I1013 13:33:50.249954  849413 profile.go:143] Saving config to /home/jenkins/minikube-integration/21724-845765/.minikube/profiles/download-only-609983/config.json ...
	I1013 13:33:50.250008  849413 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-845765/.minikube/profiles/download-only-609983/config.json: {Name:mka5d2d84fec56ec2824e9c2c7d6ba852f99ae98 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 13:33:50.250237  849413 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime docker
	I1013 13:33:50.250484  849413 download.go:108] Downloading: https://dl.k8s.io/release/v1.28.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.28.0/bin/linux/amd64/kubectl.sha256 -> /home/jenkins/minikube-integration/21724-845765/.minikube/cache/linux/amd64/v1.28.0/kubectl
	
	
	* The control-plane node download-only-609983 host does not exist
	  To start a cluster, run: "minikube start -p download-only-609983"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.0/LogsDuration (0.06s)
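
The "Last Start" log above pins an md5 digest in the preload download URL ("?checksum=md5:8a955be835827bc584bcce0658a7fcc9"). A sketch of verifying a downloaded file against such a digest; the hash value comes from the log, and the function itself is illustrative rather than minikube's implementation:

	package main

	import (
		"crypto/md5"
		"encoding/hex"
		"fmt"
		"io"
		"os"
	)

	// verifyMD5 hashes the file at path and compares it to the expected
	// lowercase hex digest, as carried in the ?checksum=md5:... query above.
	func verifyMD5(path, want string) error {
		f, err := os.Open(path)
		if err != nil {
			return err
		}
		defer f.Close()
		h := md5.New()
		if _, err := io.Copy(h, f); err != nil {
			return err
		}
		if got := hex.EncodeToString(h.Sum(nil)); got != want {
			return fmt.Errorf("checksum mismatch: got %s, want %s", got, want)
		}
		return nil
	}

	func main() {
		if err := verifyMD5(os.Args[1], "8a955be835827bc584bcce0658a7fcc9"); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		fmt.Println("checksum OK")
	}
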

                                                
                                    
TestDownloadOnly/v1.28.0/DeleteAll (0.22s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.28.0/DeleteAll (0.22s)

                                                
                                    
TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-609983
--- PASS: TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.14s)

                                                
                                    
TestDownloadOnly/v1.34.1/json-events (9.98s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-033886 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=docker --driver=docker  --container-runtime=docker
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-033886 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=docker --driver=docker  --container-runtime=docker: (9.978687833s)
--- PASS: TestDownloadOnly/v1.34.1/json-events (9.98s)

                                                
                                    
TestDownloadOnly/v1.34.1/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/preload-exists
I1013 13:34:20.065301  849401 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime docker
I1013 13:34:20.065353  849401 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21724-845765/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-docker-overlay2-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.34.1/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.34.1/LogsDuration (0.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-033886
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-033886: exit status 85 (62.703054ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                     ARGS                                                                                      │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-609983 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=docker --driver=docker  --container-runtime=docker │ download-only-609983 │ jenkins │ v1.37.0 │ 13 Oct 25 13:33 UTC │                     │
	│ delete  │ --all                                                                                                                                                                         │ minikube             │ jenkins │ v1.37.0 │ 13 Oct 25 13:34 UTC │ 13 Oct 25 13:34 UTC │
	│ delete  │ -p download-only-609983                                                                                                                                                       │ download-only-609983 │ jenkins │ v1.37.0 │ 13 Oct 25 13:34 UTC │ 13 Oct 25 13:34 UTC │
	│ start   │ -o=json --download-only -p download-only-033886 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=docker --driver=docker  --container-runtime=docker │ download-only-033886 │ jenkins │ v1.37.0 │ 13 Oct 25 13:34 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/13 13:34:10
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1013 13:34:10.129371  849850 out.go:360] Setting OutFile to fd 1 ...
	I1013 13:34:10.129659  849850 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1013 13:34:10.129669  849850 out.go:374] Setting ErrFile to fd 2...
	I1013 13:34:10.129674  849850 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1013 13:34:10.129883  849850 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21724-845765/.minikube/bin
	I1013 13:34:10.130430  849850 out.go:368] Setting JSON to true
	I1013 13:34:10.131455  849850 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":22583,"bootTime":1760339867,"procs":187,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1013 13:34:10.131558  849850 start.go:141] virtualization: kvm guest
	I1013 13:34:10.133396  849850 out.go:99] [download-only-033886] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1013 13:34:10.133596  849850 notify.go:220] Checking for updates...
	I1013 13:34:10.134844  849850 out.go:171] MINIKUBE_LOCATION=21724
	I1013 13:34:10.136350  849850 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1013 13:34:10.137564  849850 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21724-845765/kubeconfig
	I1013 13:34:10.138678  849850 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21724-845765/.minikube
	I1013 13:34:10.139900  849850 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W1013 13:34:10.141905  849850 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1013 13:34:10.142196  849850 driver.go:421] Setting default libvirt URI to qemu:///system
	I1013 13:34:10.166134  849850 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1013 13:34:10.166195  849850 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1013 13:34:10.219695  849850 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:28 OomKillDisable:false NGoroutines:46 SystemTime:2025-10-13 13:34:10.210127796 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1013 13:34:10.219806  849850 docker.go:318] overlay module found
	I1013 13:34:10.221366  849850 out.go:99] Using the docker driver based on user configuration
	I1013 13:34:10.221400  849850 start.go:305] selected driver: docker
	I1013 13:34:10.221406  849850 start.go:925] validating driver "docker" against <nil>
	I1013 13:34:10.221492  849850 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1013 13:34:10.275318  849850 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:28 OomKillDisable:false NGoroutines:46 SystemTime:2025-10-13 13:34:10.265576845 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1013 13:34:10.275504  849850 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1013 13:34:10.276030  849850 start_flags.go:410] Using suggested 8000MB memory alloc based on sys=32093MB, container=32093MB
	I1013 13:34:10.276219  849850 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1013 13:34:10.278050  849850 out.go:171] Using Docker driver with root privileges
	I1013 13:34:10.279154  849850 cni.go:84] Creating CNI manager for ""
	I1013 13:34:10.279223  849850 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1013 13:34:10.279239  849850 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1013 13:34:10.279305  849850 start.go:349] cluster config:
	{Name:download-only-033886 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:8000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:download-only-033886 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1013 13:34:10.280503  849850 out.go:99] Starting "download-only-033886" primary control-plane node in "download-only-033886" cluster
	I1013 13:34:10.280537  849850 cache.go:123] Beginning downloading kic base image for docker with docker
	I1013 13:34:10.281633  849850 out.go:99] Pulling base image v0.0.48-1759745255-21703 ...
	I1013 13:34:10.281656  849850 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime docker
	I1013 13:34:10.281777  849850 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon
	I1013 13:34:10.297542  849850 cache.go:152] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 to local cache
	I1013 13:34:10.297669  849850 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local cache directory
	I1013 13:34:10.297689  849850 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local cache directory, skipping pull
	I1013 13:34:10.297698  849850 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 exists in cache, skipping pull
	I1013 13:34:10.297708  849850 cache.go:155] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 as a tarball
	I1013 13:34:10.393045  849850 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.1/preloaded-images-k8s-v18-v1.34.1-docker-overlay2-amd64.tar.lz4
	I1013 13:34:10.393103  849850 cache.go:58] Caching tarball of preloaded images
	I1013 13:34:10.393294  849850 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime docker
	I1013 13:34:10.395047  849850 out.go:99] Downloading Kubernetes v1.34.1 preload ...
	I1013 13:34:10.395068  849850 preload.go:313] getting checksum for preloaded-images-k8s-v18-v1.34.1-docker-overlay2-amd64.tar.lz4 from gcs api...
	I1013 13:34:10.506298  849850 preload.go:290] Got checksum from GCS API "d7f0ccd752ff15c628c6fc8ef8c8033e"
	I1013 13:34:10.506356  849850 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.1/preloaded-images-k8s-v18-v1.34.1-docker-overlay2-amd64.tar.lz4?checksum=md5:d7f0ccd752ff15c628c6fc8ef8c8033e -> /home/jenkins/minikube-integration/21724-845765/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-docker-overlay2-amd64.tar.lz4
	
	
	* The control-plane node download-only-033886 host does not exist
	  To start a cluster, run: "minikube start -p download-only-033886"

-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.34.1/LogsDuration (0.06s)
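
Note: the log above is the download-only warm-up path: the kicbase image and the versioned preload tarball are cached under MINIKUBE_HOME before any node is created. A minimal sketch of the same warm-up by hand (profile and flags taken from this run; the ls path assumes the cache layout shown in the download URL above):

	# Populate the cache without creating a container, then inspect it.
	out/minikube-linux-amd64 start -p download-only-033886 --download-only --driver=docker --container-runtime=docker
	ls "$MINIKUBE_HOME/cache/preloaded-tarball/"   # preloaded-images-k8s-*.tar.lz4 lands here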

TestDownloadOnly/v1.34.1/DeleteAll (0.21s)

=== RUN   TestDownloadOnly/v1.34.1/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.34.1/DeleteAll (0.21s)

TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds (0.14s)

=== RUN   TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-033886
--- PASS: TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds (0.14s)

TestDownloadOnlyKic (0.38s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:231: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p download-docker-259400 --alsologtostderr --driver=docker  --container-runtime=docker
helpers_test.go:175: Cleaning up "download-docker-259400" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p download-docker-259400
--- PASS: TestDownloadOnlyKic (0.38s)

TestBinaryMirror (0.88s)

=== RUN   TestBinaryMirror
I1013 13:34:21.134454  849401 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubectl.sha256
aaa_download_only_test.go:309: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-847127 --alsologtostderr --binary-mirror http://127.0.0.1:38739 --driver=docker  --container-runtime=docker
helpers_test.go:175: Cleaning up "binary-mirror-847127" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-847127
--- PASS: TestBinaryMirror (0.88s)
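
Note: TestBinaryMirror serves the Kubernetes client binaries from a local HTTP endpoint and points minikube at it with --binary-mirror; a mirror laid out like the dl.k8s.io release tree should work the same way. Sketch (the port below is whatever the test happened to bind this run):

	out/minikube-linux-amd64 start --download-only -p binary-mirror-847127 --binary-mirror http://127.0.0.1:38739 --driver=docker --container-runtime=docker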

TestOffline (77.43s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-docker-923786 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=docker  --container-runtime=docker
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-docker-923786 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=docker  --container-runtime=docker: (1m13.862466001s)
helpers_test.go:175: Cleaning up "offline-docker-923786" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-docker-923786
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-docker-923786: (3.564106264s)
--- PASS: TestOffline (77.43s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1000: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-789670
addons_test.go:1000: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-789670: exit status 85 (55.789396ms)

-- stdout --
	* Profile "addons-789670" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-789670"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1011: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-789670
addons_test.go:1011: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-789670: exit status 85 (56.762946ms)

-- stdout --
	* Profile "addons-789670" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-789670"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

TestAddons/Setup (128.38s)

=== RUN   TestAddons/Setup
addons_test.go:108: (dbg) Run:  out/minikube-linux-amd64 start -p addons-789670 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=docker --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:108: (dbg) Done: out/minikube-linux-amd64 start -p addons-789670 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=docker --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (2m8.379491296s)
--- PASS: TestAddons/Setup (128.38s)
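
Note: every addon exercised below is enabled up front through the repeated --addons flags on this one start invocation. The same addons can also be toggled on a running profile; a sketch using commands of the same shape as the disable calls that appear later:

	out/minikube-linux-amd64 -p addons-789670 addons enable metrics-server
	out/minikube-linux-amd64 -p addons-789670 addons list   # per-addon enabled/disabled state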

TestAddons/serial/Volcano (41.15s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:884: volcano-controller stabilized in 15.792263ms
addons_test.go:868: volcano-scheduler stabilized in 16.217889ms
addons_test.go:876: volcano-admission stabilized in 16.268088ms
addons_test.go:890: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:352: "volcano-scheduler-76c996c8bf-2gnrj" [272e8c39-8a2c-4d43-9175-96e9f9b4bfa5] Running
addons_test.go:890: (dbg) TestAddons/serial/Volcano: app=volcano-scheduler healthy within 5.00457006s
addons_test.go:894: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:352: "volcano-admission-6c447bd768-v9fzh" [cefc5da5-70e4-4608-833f-82d2f51617e0] Running
addons_test.go:894: (dbg) TestAddons/serial/Volcano: app=volcano-admission healthy within 5.003015282s
addons_test.go:898: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:352: "volcano-controllers-6fd4f85cb8-l5h9z" [ad3b863b-2c07-44d7-ab77-44a6d4eae544] Running
addons_test.go:898: (dbg) TestAddons/serial/Volcano: app=volcano-controller healthy within 5.003356516s
addons_test.go:903: (dbg) Run:  kubectl --context addons-789670 delete -n volcano-system job volcano-admission-init
addons_test.go:909: (dbg) Run:  kubectl --context addons-789670 create -f testdata/vcjob.yaml
addons_test.go:917: (dbg) Run:  kubectl --context addons-789670 get vcjob -n my-volcano
addons_test.go:935: (dbg) TestAddons/serial/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:352: "test-job-nginx-0" [cbb4c4b3-cbae-40a6-93cd-346ea9e787b4] Pending
helpers_test.go:352: "test-job-nginx-0" [cbb4c4b3-cbae-40a6-93cd-346ea9e787b4] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "test-job-nginx-0" [cbb4c4b3-cbae-40a6-93cd-346ea9e787b4] Running
addons_test.go:935: (dbg) TestAddons/serial/Volcano: volcano.sh/job-name=test-job healthy within 14.003498231s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-789670 addons disable volcano --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-789670 addons disable volcano --alsologtostderr -v=1: (11.817999097s)
--- PASS: TestAddons/serial/Volcano (41.15s)
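
Note: Volcano installs its own CRDs, so the test submits a vcjob manifest and then waits on pods labelled volcano.sh/job-name. The same state can be inspected directly (context and namespace names reused from the log):

	kubectl --context addons-789670 get pods -n volcano-system   # scheduler, admission, controllers
	kubectl --context addons-789670 get vcjob -n my-volcano      # the submitted test job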

TestAddons/serial/GCPAuth/Namespaces (0.12s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:630: (dbg) Run:  kubectl --context addons-789670 create ns new-namespace
addons_test.go:644: (dbg) Run:  kubectl --context addons-789670 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.12s)

TestAddons/serial/GCPAuth/FakeCredentials (9.48s)

=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:675: (dbg) Run:  kubectl --context addons-789670 create -f testdata/busybox.yaml
addons_test.go:682: (dbg) Run:  kubectl --context addons-789670 create sa gcp-auth-test
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [404a58e9-6dd9-44c6-9974-cd7b2cb82f83] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [404a58e9-6dd9-44c6-9974-cd7b2cb82f83] Running
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 9.003481306s
addons_test.go:694: (dbg) Run:  kubectl --context addons-789670 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:706: (dbg) Run:  kubectl --context addons-789670 describe sa gcp-auth-test
addons_test.go:744: (dbg) Run:  kubectl --context addons-789670 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (9.48s)
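
Note: the gcp-auth webhook mutates newly created pods to carry fake GCP credentials; the assertions above reduce to environment probes inside the pod. Condensed into one command (merging the two printenv calls from the log):

	kubectl --context addons-789670 exec busybox -- printenv GOOGLE_APPLICATION_CREDENTIALS GOOGLE_CLOUD_PROJECT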

TestAddons/parallel/Registry (15.05s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:382: registry stabilized in 3.811713ms
addons_test.go:384: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-66898fdd98-dkpw4" [d7d9c153-8c84-422c-b39d-154b863e8013] Running
addons_test.go:384: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.003481316s
addons_test.go:387: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-proxy-h4c88" [9963de4b-7967-4edd-b02d-5cfa317e09af] Running
addons_test.go:387: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.003449807s
addons_test.go:392: (dbg) Run:  kubectl --context addons-789670 delete po -l run=registry-test --now
addons_test.go:397: (dbg) Run:  kubectl --context addons-789670 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:397: (dbg) Done: kubectl --context addons-789670 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (4.269087284s)
addons_test.go:411: (dbg) Run:  out/minikube-linux-amd64 -p addons-789670 ip
2025/10/13 13:37:44 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-789670 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (15.05s)
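
Note: the registry addon is probed twice above: in-cluster via the service DNS name, and from the host via the node IP on port 5000. The same checks by hand (the curl line is an assumption matching the DEBUG GET in the log):

	kubectl --context addons-789670 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- \
	  sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
	curl -s http://192.168.49.2:5000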

TestAddons/parallel/RegistryCreds (0.67s)

=== RUN   TestAddons/parallel/RegistryCreds
=== PAUSE TestAddons/parallel/RegistryCreds

=== CONT  TestAddons/parallel/RegistryCreds
addons_test.go:323: registry-creds stabilized in 3.536494ms
addons_test.go:325: (dbg) Run:  out/minikube-linux-amd64 addons configure registry-creds -f ./testdata/addons_testconfig.json -p addons-789670
addons_test.go:332: (dbg) Run:  kubectl --context addons-789670 -n kube-system get secret -o yaml
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-789670 addons disable registry-creds --alsologtostderr -v=1
--- PASS: TestAddons/parallel/RegistryCreds (0.67s)

TestAddons/parallel/Ingress (21.41s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-789670 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-789670 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-789670 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:352: "nginx" [87b5f309-7e29-4fb1-a88a-80eb027d842f] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx" [87b5f309-7e29-4fb1-a88a-80eb027d842f] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 11.004113242s
I1013 13:37:52.332408  849401 kapi.go:150] Service nginx in namespace default found.
addons_test.go:264: (dbg) Run:  out/minikube-linux-amd64 -p addons-789670 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:288: (dbg) Run:  kubectl --context addons-789670 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-linux-amd64 -p addons-789670 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-789670 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-789670 addons disable ingress-dns --alsologtostderr -v=1: (1.580409723s)
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-789670 addons disable ingress --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-789670 addons disable ingress --alsologtostderr -v=1: (7.654469033s)
--- PASS: TestAddons/parallel/Ingress (21.41s)
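
Note: the ingress assertion drives nginx through the controller with a spoofed Host header, and ingress-dns is checked by resolving a test record against the node IP; both probes can be rerun verbatim from the log:

	out/minikube-linux-amd64 -p addons-789670 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
	nslookup hello-john.test 192.168.49.2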

TestAddons/parallel/InspektorGadget (5.22s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:352: "gadget-w9r5m" [a83a0181-a8dd-4e64-927a-8833cb34c734] Running
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.004145956s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-789670 addons disable inspektor-gadget --alsologtostderr -v=1
--- PASS: TestAddons/parallel/InspektorGadget (5.22s)

TestAddons/parallel/MetricsServer (5.61s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:455: metrics-server stabilized in 3.934862ms
I1013 13:37:30.074170  849401 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I1013 13:37:30.074198  849401 kapi.go:107] duration metric: took 3.889469ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:352: "metrics-server-85b7d694d7-gk4h4" [3a871c2e-e7a3-4e69-aeb8-0bf608969e87] Running
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.003265021s
addons_test.go:463: (dbg) Run:  kubectl --context addons-789670 top pods -n kube-system
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-789670 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.61s)
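
Note: once metrics-server reports healthy, resource usage is served through the metrics API; the test only asserts that the pod query succeeds. Sketch (the nodes query is an extra illustration, not part of the test):

	kubectl --context addons-789670 top pods -n kube-system
	kubectl --context addons-789670 top nodes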

TestAddons/parallel/CSI (51.35s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
I1013 13:37:30.070326  849401 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
addons_test.go:549: csi-hostpath-driver pods stabilized in 3.901127ms
addons_test.go:552: (dbg) Run:  kubectl --context addons-789670 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:557: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-789670 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-789670 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-789670 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-789670 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-789670 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-789670 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-789670 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-789670 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-789670 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:562: (dbg) Run:  kubectl --context addons-789670 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:567: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:352: "task-pv-pod" [50ece835-1bac-439c-a87d-f00eb10bc92c] Pending
helpers_test.go:352: "task-pv-pod" [50ece835-1bac-439c-a87d-f00eb10bc92c] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod" [50ece835-1bac-439c-a87d-f00eb10bc92c] Running
addons_test.go:567: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 7.004038089s
addons_test.go:572: (dbg) Run:  kubectl --context addons-789670 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:577: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:427: (dbg) Run:  kubectl --context addons-789670 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:427: (dbg) Run:  kubectl --context addons-789670 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:582: (dbg) Run:  kubectl --context addons-789670 delete pod task-pv-pod
addons_test.go:588: (dbg) Run:  kubectl --context addons-789670 delete pvc hpvc
addons_test.go:594: (dbg) Run:  kubectl --context addons-789670 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:599: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-789670 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-789670 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-789670 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-789670 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-789670 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-789670 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-789670 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-789670 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-789670 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-789670 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-789670 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-789670 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-789670 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-789670 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-789670 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-789670 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-789670 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-789670 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-789670 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:604: (dbg) Run:  kubectl --context addons-789670 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:609: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:352: "task-pv-pod-restore" [7a8f2f51-38ec-4a2b-bbcc-6fafef1b47dc] Pending
helpers_test.go:352: "task-pv-pod-restore" [7a8f2f51-38ec-4a2b-bbcc-6fafef1b47dc] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod-restore" [7a8f2f51-38ec-4a2b-bbcc-6fafef1b47dc] Running
addons_test.go:609: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 7.003892673s
addons_test.go:614: (dbg) Run:  kubectl --context addons-789670 delete pod task-pv-pod-restore
addons_test.go:614: (dbg) Done: kubectl --context addons-789670 delete pod task-pv-pod-restore: (1.104878052s)
addons_test.go:618: (dbg) Run:  kubectl --context addons-789670 delete pvc hpvc-restore
addons_test.go:622: (dbg) Run:  kubectl --context addons-789670 delete volumesnapshot new-snapshot-demo
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-789670 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-789670 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-789670 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.481945147s)
--- PASS: TestAddons/parallel/CSI (51.35s)
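
Note: the CSI flow above is PVC -> pod -> VolumeSnapshot -> restored PVC -> pod, with readiness polled via jsonpath between steps. Condensed to the commands that matter (manifests live in the repo's testdata/ directory, as shown in the log):

	kubectl --context addons-789670 create -f testdata/csi-hostpath-driver/pvc.yaml
	kubectl --context addons-789670 create -f testdata/csi-hostpath-driver/pv-pod.yaml
	kubectl --context addons-789670 create -f testdata/csi-hostpath-driver/snapshot.yaml
	kubectl --context addons-789670 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse}
	kubectl --context addons-789670 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
	kubectl --context addons-789670 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml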

TestAddons/parallel/Headlamp (17.34s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:808: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-789670 --alsologtostderr -v=1
addons_test.go:813: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:352: "headlamp-6945c6f4d-zcw47" [2f8157b0-9978-4994-a73d-91cffebcb82b] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:352: "headlamp-6945c6f4d-zcw47" [2f8157b0-9978-4994-a73d-91cffebcb82b] Running
addons_test.go:813: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 11.003884567s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-789670 addons disable headlamp --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-789670 addons disable headlamp --alsologtostderr -v=1: (5.626339033s)
--- PASS: TestAddons/parallel/Headlamp (17.34s)

TestAddons/parallel/CloudSpanner (6.45s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:352: "cloud-spanner-emulator-86bd5cbb97-l967r" [d9f087e0-5e79-41bc-bc77-81e4c81124d8] Running
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 6.003409342s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-789670 addons disable cloud-spanner --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CloudSpanner (6.45s)

TestAddons/parallel/LocalPath (54.6s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:949: (dbg) Run:  kubectl --context addons-789670 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:955: (dbg) Run:  kubectl --context addons-789670 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:959: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-789670 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-789670 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-789670 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-789670 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-789670 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-789670 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-789670 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:352: "test-local-path" [137f8605-d9d0-479f-ae2c-e52e87043473] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "test-local-path" [137f8605-d9d0-479f-ae2c-e52e87043473] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "test-local-path" [137f8605-d9d0-479f-ae2c-e52e87043473] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 5.003744246s
addons_test.go:967: (dbg) Run:  kubectl --context addons-789670 get pvc test-pvc -o=json
addons_test.go:976: (dbg) Run:  out/minikube-linux-amd64 -p addons-789670 ssh "cat /opt/local-path-provisioner/pvc-a8415ec3-0760-4005-a6d9-9a1ea348ec6e_default_test-pvc/file1"
addons_test.go:988: (dbg) Run:  kubectl --context addons-789670 delete pod test-local-path
addons_test.go:992: (dbg) Run:  kubectl --context addons-789670 delete pvc test-pvc
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-789670 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-789670 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (42.69272053s)
--- PASS: TestAddons/parallel/LocalPath (54.60s)

TestAddons/parallel/NvidiaDevicePlugin (6.43s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:352: "nvidia-device-plugin-daemonset-7687m" [5c1afe11-edd5-476b-84d9-27c0f7c05399] Running
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.00379814s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-789670 addons disable nvidia-device-plugin --alsologtostderr -v=1
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.43s)

TestAddons/parallel/Yakd (10.66s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

=== CONT  TestAddons/parallel/Yakd
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:352: "yakd-dashboard-5ff678cb9-zpqm8" [c01202bb-744f-465f-951a-e34fbece95fb] Running
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 5.003340719s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-789670 addons disable yakd --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-789670 addons disable yakd --alsologtostderr -v=1: (5.650885152s)
--- PASS: TestAddons/parallel/Yakd (10.66s)

TestAddons/parallel/AmdGpuDevicePlugin (6.47s)

=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin

=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:1038: (dbg) TestAddons/parallel/AmdGpuDevicePlugin: waiting 6m0s for pods matching "name=amd-gpu-device-plugin" in namespace "kube-system" ...
helpers_test.go:352: "amd-gpu-device-plugin-98gmc" [45d5086b-9e4f-4699-a2bd-05f5f36b8d0f] Running
addons_test.go:1038: (dbg) TestAddons/parallel/AmdGpuDevicePlugin: name=amd-gpu-device-plugin healthy within 6.004580679s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-789670 addons disable amd-gpu-device-plugin --alsologtostderr -v=1
--- PASS: TestAddons/parallel/AmdGpuDevicePlugin (6.47s)

TestAddons/StoppedEnableDisable (11.21s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-789670
addons_test.go:172: (dbg) Done: out/minikube-linux-amd64 stop -p addons-789670: (10.949831207s)
addons_test.go:176: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-789670
addons_test.go:180: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-789670
addons_test.go:185: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-789670
--- PASS: TestAddons/StoppedEnableDisable (11.21s)

TestCertOptions (27.95s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-259290 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=docker
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-259290 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=docker: (25.04745439s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-259290 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-259290 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-259290 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-259290" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-259290
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-259290: (2.207560859s)
--- PASS: TestCertOptions (27.95s)
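
Note: the --apiserver-ips/--apiserver-names/--apiserver-port assertions come down to reading the generated apiserver certificate inside the node; a sketch of the manual check (the grep is added here for brevity):

	out/minikube-linux-amd64 -p cert-options-259290 ssh \
	  "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt" | grep -A1 'Subject Alternative Name'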

TestCertExpiration (248.37s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-037175 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=docker
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-037175 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=docker: (28.821984093s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-037175 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=docker
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-037175 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=docker: (35.780242382s)
helpers_test.go:175: Cleaning up "cert-expiration-037175" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-037175
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-037175: (3.770607121s)
--- PASS: TestCertExpiration (248.37s)
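
Note: the roughly three-minute gap between the two starts is the point of the test: the 3m certificates are allowed to lapse, and the second start must rotate them to the new 8760h lifetime. Schematically (the sleep is an approximation of the test's wait):

	out/minikube-linux-amd64 start -p cert-expiration-037175 --memory=3072 --cert-expiration=3m --driver=docker --container-runtime=docker
	sleep 180   # let the short-lived certs expire
	out/minikube-linux-amd64 start -p cert-expiration-037175 --memory=3072 --cert-expiration=8760h --driver=docker --container-runtime=docker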

TestDockerFlags (35.3s)

=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags

=== CONT  TestDockerFlags
docker_test.go:51: (dbg) Run:  out/minikube-linux-amd64 start -p docker-flags-462406 --cache-images=false --memory=3072 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
docker_test.go:51: (dbg) Done: out/minikube-linux-amd64 start -p docker-flags-462406 --cache-images=false --memory=3072 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (32.177167778s)
docker_test.go:56: (dbg) Run:  out/minikube-linux-amd64 -p docker-flags-462406 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:67: (dbg) Run:  out/minikube-linux-amd64 -p docker-flags-462406 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
helpers_test.go:175: Cleaning up "docker-flags-462406" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-flags-462406
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-flags-462406: (2.255978602s)
--- PASS: TestDockerFlags (35.30s)
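
Note: --docker-env and --docker-opt values must surface in the Docker daemon's systemd unit inside the node; the two ssh probes above check exactly that and can be rerun verbatim:

	out/minikube-linux-amd64 -p docker-flags-462406 ssh "sudo systemctl show docker --property=Environment --no-pager"   # FOO=BAR and BAZ=BAT should appear
	out/minikube-linux-amd64 -p docker-flags-462406 ssh "sudo systemctl show docker --property=ExecStart --no-pager"     # the debug/icc options should appear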

TestForceSystemdFlag (42.11s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-983710 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-983710 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (39.471139746s)
docker_test.go:110: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-983710 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-flag-983710" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-983710
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-983710: (2.249617877s)
--- PASS: TestForceSystemdFlag (42.11s)
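
Note: both force-systemd variants (the explicit --force-systemd flag here, the environment-driven variant below) reduce to the same one-line assertion against the runtime inside the node:

	out/minikube-linux-amd64 -p force-systemd-flag-983710 ssh "docker info --format {{.CgroupDriver}}"   # expect: systemd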

TestForceSystemdEnv (31.11s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-517212 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-517212 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (28.563289171s)
docker_test.go:110: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-env-517212 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-env-517212" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-517212
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-517212: (2.211252901s)
--- PASS: TestForceSystemdEnv (31.11s)

TestKVMDriverInstallOrUpdate (0.9s)

=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate

=== CONT  TestKVMDriverInstallOrUpdate
I1013 14:13:12.202227  849401 install.go:66] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1013 14:13:12.202367  849401 install.go:138] Validating docker-machine-driver-kvm2, PATH=/tmp/TestKVMDriverInstallOrUpdate131089925/001:/home/jenkins/workspace/Docker_Linux_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
I1013 14:13:12.230660  849401 install.go:163] /tmp/TestKVMDriverInstallOrUpdate131089925/001/docker-machine-driver-kvm2 version is 1.1.1
W1013 14:13:12.230695  849401 install.go:76] docker-machine-driver-kvm2: docker-machine-driver-kvm2 is version 1.1.1, want 1.37.0
W1013 14:13:12.230802  849401 out.go:176] [unset outFile]: * Downloading driver docker-machine-driver-kvm2:
I1013 14:13:12.230853  849401 download.go:108] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.37.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.37.0/docker-machine-driver-kvm2-amd64.sha256 -> /tmp/TestKVMDriverInstallOrUpdate131089925/001/docker-machine-driver-kvm2
I1013 14:13:12.951515  849401 install.go:138] Validating docker-machine-driver-kvm2, PATH=/tmp/TestKVMDriverInstallOrUpdate131089925/001:/home/jenkins/workspace/Docker_Linux_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
I1013 14:13:12.968671  849401 install.go:163] /tmp/TestKVMDriverInstallOrUpdate131089925/001/docker-machine-driver-kvm2 version is 1.37.0
--- PASS: TestKVMDriverInstallOrUpdate (0.90s)
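
The download.go:108 line above names both the release artifact and its checksum file; a rough by-hand equivalent of the update step (digest comparison done manually here, since the test harness verifies it itself):

	curl -fLo docker-machine-driver-kvm2 https://github.com/kubernetes/minikube/releases/download/v1.37.0/docker-machine-driver-kvm2-amd64
	curl -fL https://github.com/kubernetes/minikube/releases/download/v1.37.0/docker-machine-driver-kvm2-amd64.sha256
	sha256sum docker-machine-driver-kvm2   # compare with the digest fetched above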

TestErrorSpam/setup (21.57s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-633994 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-633994 --driver=docker  --container-runtime=docker
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-633994 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-633994 --driver=docker  --container-runtime=docker: (21.566924632s)
--- PASS: TestErrorSpam/setup (21.57s)

TestErrorSpam/start (0.64s)

=== RUN   TestErrorSpam/start
error_spam_test.go:206: Cleaning up 1 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-633994 --log_dir /tmp/nospam-633994 start --dry-run
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-633994 --log_dir /tmp/nospam-633994 start --dry-run
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-633994 --log_dir /tmp/nospam-633994 start --dry-run
--- PASS: TestErrorSpam/start (0.64s)

TestErrorSpam/status (0.94s)

=== RUN   TestErrorSpam/status
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-633994 --log_dir /tmp/nospam-633994 status
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-633994 --log_dir /tmp/nospam-633994 status
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-633994 --log_dir /tmp/nospam-633994 status
--- PASS: TestErrorSpam/status (0.94s)

TestErrorSpam/pause (1.25s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-633994 --log_dir /tmp/nospam-633994 pause
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-633994 --log_dir /tmp/nospam-633994 pause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-633994 --log_dir /tmp/nospam-633994 pause
--- PASS: TestErrorSpam/pause (1.25s)

TestErrorSpam/unpause (1.35s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-633994 --log_dir /tmp/nospam-633994 unpause
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-633994 --log_dir /tmp/nospam-633994 unpause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-633994 --log_dir /tmp/nospam-633994 unpause
--- PASS: TestErrorSpam/unpause (1.35s)

TestErrorSpam/stop (10.93s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-633994 --log_dir /tmp/nospam-633994 stop
error_spam_test.go:149: (dbg) Done: out/minikube-linux-amd64 -p nospam-633994 --log_dir /tmp/nospam-633994 stop: (10.746780329s)
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-633994 --log_dir /tmp/nospam-633994 stop
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-633994 --log_dir /tmp/nospam-633994 stop
--- PASS: TestErrorSpam/stop (10.93s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/21724-845765/.minikube/files/etc/test/nested/copy/849401/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (59.87s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-amd64 start -p functional-574138 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=docker
functional_test.go:2239: (dbg) Done: out/minikube-linux-amd64 start -p functional-574138 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=docker: (59.870261408s)
--- PASS: TestFunctional/serial/StartWithProxy (59.87s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (50.72s)

=== RUN   TestFunctional/serial/SoftStart
I1013 13:40:40.882942  849401 config.go:182] Loaded profile config "functional-574138": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.1
functional_test.go:674: (dbg) Run:  out/minikube-linux-amd64 start -p functional-574138 --alsologtostderr -v=8
E1013 13:41:30.452972  849401 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-845765/.minikube/profiles/addons-789670/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 13:41:30.459389  849401 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-845765/.minikube/profiles/addons-789670/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 13:41:30.470708  849401 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-845765/.minikube/profiles/addons-789670/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 13:41:30.492061  849401 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-845765/.minikube/profiles/addons-789670/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 13:41:30.533561  849401 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-845765/.minikube/profiles/addons-789670/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 13:41:30.615305  849401 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-845765/.minikube/profiles/addons-789670/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 13:41:30.776862  849401 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-845765/.minikube/profiles/addons-789670/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 13:41:31.098360  849401 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-845765/.minikube/profiles/addons-789670/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:674: (dbg) Done: out/minikube-linux-amd64 start -p functional-574138 --alsologtostderr -v=8: (50.718117292s)
functional_test.go:678: soft start took 50.718908652s for "functional-574138" cluster.
I1013 13:41:31.601513  849401 config.go:182] Loaded profile config "functional-574138": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.1
--- PASS: TestFunctional/serial/SoftStart (50.72s)

TestFunctional/serial/KubeContext (0.05s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.05s)

TestFunctional/serial/KubectlGetPods (0.06s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-574138 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.06s)

TestFunctional/serial/CacheCmd/cache/add_remote (2.39s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-574138 cache add registry.k8s.io/pause:3.1
E1013 13:41:31.739784  849401 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-845765/.minikube/profiles/addons-789670/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-574138 cache add registry.k8s.io/pause:3.3
E1013 13:41:33.021639  849401 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-845765/.minikube/profiles/addons-789670/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-574138 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (2.39s)

TestFunctional/serial/CacheCmd/cache/add_local (1.5s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-574138 /tmp/TestFunctionalserialCacheCmdcacheadd_local3611798508/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-amd64 -p functional-574138 cache add minikube-local-cache-test:functional-574138
functional_test.go:1104: (dbg) Done: out/minikube-linux-amd64 -p functional-574138 cache add minikube-local-cache-test:functional-574138: (1.132579263s)
functional_test.go:1109: (dbg) Run:  out/minikube-linux-amd64 -p functional-574138 cache delete minikube-local-cache-test:functional-574138
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-574138
E1013 13:41:35.583428  849401 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-845765/.minikube/profiles/addons-789670/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.50s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

TestFunctional/serial/CacheCmd/cache/list (0.05s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.05s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.28s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-amd64 -p functional-574138 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.28s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.31s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-amd64 -p functional-574138 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 -p functional-574138 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-574138 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (278.959279ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-amd64 -p functional-574138 cache reload
functional_test.go:1178: (dbg) Run:  out/minikube-linux-amd64 -p functional-574138 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.31s)
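
Condensing the round trip above (commands verbatim from the log, with a stock minikube binary standing in for out/minikube-linux-amd64):

	minikube -p functional-574138 ssh sudo docker rmi registry.k8s.io/pause:latest
	minikube -p functional-574138 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # exits 1: image gone from the node
	minikube -p functional-574138 cache reload                                            # loads cached images back onto the node
	minikube -p functional-574138 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # succeeds again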

TestFunctional/serial/CacheCmd/cache/delete (0.1s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.10s)

TestFunctional/serial/MinikubeKubectlCmd (0.11s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-amd64 -p functional-574138 kubectl -- --context functional-574138 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.11s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.1s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-574138 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.10s)

TestFunctional/serial/ExtraConfig (51.55s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-amd64 start -p functional-574138 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E1013 13:41:40.705597  849401 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-845765/.minikube/profiles/addons-789670/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 13:41:50.947906  849401 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-845765/.minikube/profiles/addons-789670/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 13:42:11.429935  849401 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-845765/.minikube/profiles/addons-789670/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:772: (dbg) Done: out/minikube-linux-amd64 start -p functional-574138 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (51.548699095s)
functional_test.go:776: restart took 51.548858662s for "functional-574138" cluster.
I1013 13:42:29.158325  849401 config.go:182] Loaded profile config "functional-574138": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.1
--- PASS: TestFunctional/serial/ExtraConfig (51.55s)
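
--extra-config takes component.key=value; the DryRun capture later in this report shows it persisted in the profile as ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}]. The restart, condensed:

	minikube start -p functional-574138 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all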

TestFunctional/serial/ComponentHealth (0.06s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-574138 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.06s)

TestFunctional/serial/LogsCmd (1s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-amd64 -p functional-574138 logs
functional_test.go:1251: (dbg) Done: out/minikube-linux-amd64 -p functional-574138 logs: (1.00028238s)
--- PASS: TestFunctional/serial/LogsCmd (1.00s)

TestFunctional/serial/LogsFileCmd (1.02s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-amd64 -p functional-574138 logs --file /tmp/TestFunctionalserialLogsFileCmd2502741460/001/logs.txt
functional_test.go:1265: (dbg) Done: out/minikube-linux-amd64 -p functional-574138 logs --file /tmp/TestFunctionalserialLogsFileCmd2502741460/001/logs.txt: (1.018256391s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.02s)

TestFunctional/serial/InvalidService (3.98s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-574138 apply -f testdata/invalidsvc.yaml
functional_test.go:2340: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-574138
functional_test.go:2340: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-574138: exit status 115 (352.957912ms)

-- stdout --
	┌───────────┬─────────────┬─────────────┬───────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │            URL            │
	├───────────┼─────────────┼─────────────┼───────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.49.2:30903 │
	└───────────┴─────────────┴─────────────┴───────────────────────────┘
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2332: (dbg) Run:  kubectl --context functional-574138 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (3.98s)
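
A sketch of the same negative check; the exit code 115 and SVC_UNREACHABLE reason are taken from the capture above and apply to any service whose selector matches no running pod:

	kubectl --context functional-574138 apply -f testdata/invalidsvc.yaml
	minikube service invalid-svc -p functional-574138   # exits 115: SVC_UNREACHABLE
	kubectl --context functional-574138 delete -f testdata/invalidsvc.yaml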

TestFunctional/parallel/ConfigCmd (0.38s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-574138 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-574138 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-574138 config get cpus: exit status 14 (84.274934ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-574138 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-574138 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-574138 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-574138 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-574138 config get cpus: exit status 14 (51.002502ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.38s)
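
Condensed from the run above: "config get" on an unset key exits 14 with "specified key could not be found in config", which is the behaviour the test asserts at both ends of the set/unset cycle:

	minikube -p functional-574138 config set cpus 2
	minikube -p functional-574138 config get cpus     # prints 2
	minikube -p functional-574138 config unset cpus
	minikube -p functional-574138 config get cpus     # exits 14: key not found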

TestFunctional/parallel/DashboardCmd (11.5s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-574138 --alsologtostderr -v=1]
functional_test.go:925: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-574138 --alsologtostderr -v=1] ...
helpers_test.go:525: unable to kill pid 896637: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (11.50s)
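
Outside the harness the same invocation blocks while it proxies, which is why the test runs it as a daemon and then kills the pid:

	minikube -p functional-574138 dashboard --url --port 36195   # prints the proxied URL, then keeps the proxy open until interrupted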

TestFunctional/parallel/DryRun (0.41s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-amd64 start -p functional-574138 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-574138 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker: exit status 23 (174.75841ms)

-- stdout --
	* [functional-574138] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21724
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21724-845765/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21724-845765/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I1013 13:42:37.325211  895946 out.go:360] Setting OutFile to fd 1 ...
	I1013 13:42:37.325531  895946 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1013 13:42:37.325544  895946 out.go:374] Setting ErrFile to fd 2...
	I1013 13:42:37.325550  895946 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1013 13:42:37.325817  895946 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21724-845765/.minikube/bin
	I1013 13:42:37.326377  895946 out.go:368] Setting JSON to false
	I1013 13:42:37.327755  895946 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":23090,"bootTime":1760339867,"procs":240,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1013 13:42:37.327878  895946 start.go:141] virtualization: kvm guest
	I1013 13:42:37.331231  895946 out.go:179] * [functional-574138] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1013 13:42:37.332695  895946 out.go:179]   - MINIKUBE_LOCATION=21724
	I1013 13:42:37.332721  895946 notify.go:220] Checking for updates...
	I1013 13:42:37.335319  895946 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1013 13:42:37.336620  895946 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21724-845765/kubeconfig
	I1013 13:42:37.337901  895946 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21724-845765/.minikube
	I1013 13:42:37.339101  895946 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1013 13:42:37.340224  895946 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1013 13:42:37.341781  895946 config.go:182] Loaded profile config "functional-574138": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.1
	I1013 13:42:37.342513  895946 driver.go:421] Setting default libvirt URI to qemu:///system
	I1013 13:42:37.367895  895946 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1013 13:42:37.367990  895946 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1013 13:42:37.432228  895946 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:38 OomKillDisable:false NGoroutines:57 SystemTime:2025-10-13 13:42:37.421531367 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1013 13:42:37.432384  895946 docker.go:318] overlay module found
	I1013 13:42:37.434179  895946 out.go:179] * Using the docker driver based on existing profile
	I1013 13:42:37.435401  895946 start.go:305] selected driver: docker
	I1013 13:42:37.435428  895946 start.go:925] validating driver "docker" against &{Name:functional-574138 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-574138 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1013 13:42:37.435539  895946 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1013 13:42:37.437623  895946 out.go:203] 
	W1013 13:42:37.438805  895946 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1013 13:42:37.439912  895946 out.go:203] 

** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-amd64 start -p functional-574138 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
--- PASS: TestFunctional/parallel/DryRun (0.41s)
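
--dry-run still runs the start-time validators, so the undersized request fails fast with exit 23 (RSRC_INSUFFICIENT_REQ_MEMORY: 250MiB is below the 1800MB usable minimum) while the unconstrained invocation validates cleanly:

	minikube start -p functional-574138 --dry-run --memory 250MB --driver=docker --container-runtime=docker   # exits 23
	minikube start -p functional-574138 --dry-run --driver=docker --container-runtime=docker                  # passes validation, changes nothing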

TestFunctional/parallel/InternationalLanguage (0.18s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-amd64 start -p functional-574138 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-574138 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker: exit status 23 (178.599495ms)

-- stdout --
	* [functional-574138] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21724
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21724-845765/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21724-845765/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I1013 13:42:37.140938  895799 out.go:360] Setting OutFile to fd 1 ...
	I1013 13:42:37.141100  895799 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1013 13:42:37.141114  895799 out.go:374] Setting ErrFile to fd 2...
	I1013 13:42:37.141120  895799 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1013 13:42:37.141456  895799 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21724-845765/.minikube/bin
	I1013 13:42:37.142691  895799 out.go:368] Setting JSON to false
	I1013 13:42:37.144065  895799 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":23090,"bootTime":1760339867,"procs":242,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1013 13:42:37.144203  895799 start.go:141] virtualization: kvm guest
	I1013 13:42:37.145522  895799 out.go:179] * [functional-574138] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	I1013 13:42:37.146971  895799 notify.go:220] Checking for updates...
	I1013 13:42:37.147004  895799 out.go:179]   - MINIKUBE_LOCATION=21724
	I1013 13:42:37.148323  895799 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1013 13:42:37.149883  895799 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21724-845765/kubeconfig
	I1013 13:42:37.151212  895799 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21724-845765/.minikube
	I1013 13:42:37.152244  895799 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1013 13:42:37.153257  895799 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1013 13:42:37.154640  895799 config.go:182] Loaded profile config "functional-574138": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.1
	I1013 13:42:37.155240  895799 driver.go:421] Setting default libvirt URI to qemu:///system
	I1013 13:42:37.181073  895799 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1013 13:42:37.181204  895799 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1013 13:42:37.254949  895799 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:56 SystemTime:2025-10-13 13:42:37.241606476 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1013 13:42:37.255057  895799 docker.go:318] overlay module found
	I1013 13:42:37.257799  895799 out.go:179] * Utilisation du pilote docker basé sur le profil existant
	I1013 13:42:37.258870  895799 start.go:305] selected driver: docker
	I1013 13:42:37.258886  895799 start.go:925] validating driver "docker" against &{Name:functional-574138 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-574138 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1013 13:42:37.259020  895799 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1013 13:42:37.260723  895799 out.go:203] 
	W1013 13:42:37.262251  895799 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1013 13:42:37.263630  895799 out.go:203] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.18s)

TestFunctional/parallel/StatusCmd (1.06s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-amd64 -p functional-574138 status
functional_test.go:875: (dbg) Run:  out/minikube-linux-amd64 -p functional-574138 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-linux-amd64 -p functional-574138 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.06s)
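
The three invocations above cover the status command's output modes; the Go-template form at functional_test.go:875 addresses fields of the status struct directly:

	minikube -p functional-574138 status                                                # human-readable table
	minikube -p functional-574138 status -f 'host:{{.Host}},apiserver:{{.APIServer}}'   # Go template over status fields
	minikube -p functional-574138 status -o json                                        # machine-readable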

TestFunctional/parallel/ServiceCmdConnect (13.77s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-574138 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1640: (dbg) Run:  kubectl --context functional-574138 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:352: "hello-node-connect-7d85dfc575-l6kr4" [db01cd12-05a9-44fb-bca5-b1fc3b9e4f97] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:352: "hello-node-connect-7d85dfc575-l6kr4" [db01cd12-05a9-44fb-bca5-b1fc3b9e4f97] Running
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 13.00448318s
functional_test.go:1654: (dbg) Run:  out/minikube-linux-amd64 -p functional-574138 service hello-node-connect --url
functional_test.go:1660: found endpoint for hello-node-connect: http://192.168.49.2:30857
functional_test.go:1680: http://192.168.49.2:30857: success! body:
Request served by hello-node-connect-7d85dfc575-l6kr4

HTTP/1.1 GET /

Host: 192.168.49.2:30857
Accept-Encoding: gzip
User-Agent: Go-http-client/1.1
--- PASS: TestFunctional/parallel/ServiceCmdConnect (13.77s)
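
The workflow is deploy, expose as a NodePort, then let minikube resolve the node URL; a sketch with a hypothetical deployment name:

	kubectl --context functional-574138 create deployment echo --image kicbase/echo-server
	kubectl --context functional-574138 expose deployment echo --type=NodePort --port=8080
	curl "$(minikube -p functional-574138 service echo --url)"   # echo-server answers with the request it served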

TestFunctional/parallel/AddonsCmd (0.15s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-amd64 -p functional-574138 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-amd64 -p functional-574138 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.15s)

TestFunctional/parallel/PersistentVolumeClaim (41.15s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:352: "storage-provisioner" [2e37f3fb-a175-4176-8305-44cbe71a9015] Running
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.004028938s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-574138 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-574138 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-574138 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-574138 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [64372074-4629-4956-b73d-4e1188f37af8] Pending
helpers_test.go:352: "sp-pod" [64372074-4629-4956-b73d-4e1188f37af8] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:352: "sp-pod" [64372074-4629-4956-b73d-4e1188f37af8] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 15.00346766s
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-574138 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:112: (dbg) Run:  kubectl --context functional-574138 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:112: (dbg) Done: kubectl --context functional-574138 delete -f testdata/storage-provisioner/pod.yaml: (1.325334176s)
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-574138 apply -f testdata/storage-provisioner/pod.yaml
I1013 13:43:00.722616  849401 detect.go:223] nested VM detected
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [50d6b7d5-2a9e-45ec-af9f-02e03bb99992] Pending
helpers_test.go:352: "sp-pod" [50d6b7d5-2a9e-45ec-af9f-02e03bb99992] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:352: "sp-pod" [50d6b7d5-2a9e-45ec-af9f-02e03bb99992] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 18.003688116s
functional_test_pvc_test.go:120: (dbg) Run:  kubectl --context functional-574138 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (41.15s)
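
The second pod exists to prove persistence: the file written before the delete is still on the volume when a fresh pod mounts the same claim. Condensed from the steps above:

	kubectl --context functional-574138 apply -f testdata/storage-provisioner/pvc.yaml
	kubectl --context functional-574138 apply -f testdata/storage-provisioner/pod.yaml
	kubectl --context functional-574138 exec sp-pod -- touch /tmp/mount/foo
	kubectl --context functional-574138 delete -f testdata/storage-provisioner/pod.yaml
	kubectl --context functional-574138 apply -f testdata/storage-provisioner/pod.yaml
	kubectl --context functional-574138 exec sp-pod -- ls /tmp/mount   # foo survives the recreation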

TestFunctional/parallel/SSHCmd (0.62s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-amd64 -p functional-574138 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-amd64 -p functional-574138 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.62s)

TestFunctional/parallel/CpCmd (1.87s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-574138 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-574138 ssh -n functional-574138 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-574138 cp functional-574138:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd3119190237/001/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-574138 ssh -n functional-574138 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-574138 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-574138 ssh -n functional-574138 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.87s)
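
minikube cp addresses the node side as <profile>:<path>; the directions exercised above, condensed:

	minikube -p functional-574138 cp testdata/cp-test.txt /home/docker/cp-test.txt                 # host -> node
	minikube -p functional-574138 cp functional-574138:/home/docker/cp-test.txt /tmp/cp-test.txt   # node -> host
	minikube -p functional-574138 ssh -n functional-574138 "sudo cat /home/docker/cp-test.txt"     # verify on the node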

TestFunctional/parallel/MySQL (22.69s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1798: (dbg) Run:  kubectl --context functional-574138 replace --force -f testdata/mysql.yaml
functional_test.go:1804: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:352: "mysql-5bb876957f-q6wc5" [9654fe95-32bc-49b0-846e-85b2c0a89079] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:352: "mysql-5bb876957f-q6wc5" [9654fe95-32bc-49b0-846e-85b2c0a89079] Running
functional_test.go:1804: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 18.00340667s
functional_test.go:1812: (dbg) Run:  kubectl --context functional-574138 exec mysql-5bb876957f-q6wc5 -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-574138 exec mysql-5bb876957f-q6wc5 -- mysql -ppassword -e "show databases;": exit status 1 (119.08544ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **
I1013 13:43:16.923998  849401 retry.go:31] will retry after 1.079906175s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-574138 exec mysql-5bb876957f-q6wc5 -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-574138 exec mysql-5bb876957f-q6wc5 -- mysql -ppassword -e "show databases;": exit status 1 (113.914784ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
I1013 13:43:18.118810  849401 retry.go:31] will retry after 1.147484318s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-574138 exec mysql-5bb876957f-q6wc5 -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-574138 exec mysql-5bb876957f-q6wc5 -- mysql -ppassword -e "show databases;": exit status 1 (128.765839ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
I1013 13:43:19.396032  849401 retry.go:31] will retry after 1.820131811s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-574138 exec mysql-5bb876957f-q6wc5 -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (22.69s)
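Note: the retries above are the MySQL image's normal startup sequence, not flakiness. ERROR 1045 appears while the entrypoint is still provisioning credentials, and ERROR 2002 while mysqld is restarting onto its socket; the harness simply retries with backoff until the query succeeds. A stand-alone sketch of the same readiness probe (pod name and password taken from this run, attempt count arbitrary):

    # retry "show databases;" until mysqld accepts the test credentials
    for i in $(seq 1 30); do
      kubectl --context functional-574138 exec mysql-5bb876957f-q6wc5 -- \
        mysql -ppassword -e "show databases;" && break
      sleep 2
    done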
TestFunctional/parallel/FileSync (0.27s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/849401/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-amd64 -p functional-574138 ssh "sudo cat /etc/test/nested/copy/849401/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.27s)
TestFunctional/parallel/CertSync (1.62s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/849401.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-574138 ssh "sudo cat /etc/ssl/certs/849401.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/849401.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-574138 ssh "sudo cat /usr/share/ca-certificates/849401.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-574138 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/8494012.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-574138 ssh "sudo cat /etc/ssl/certs/8494012.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/8494012.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-574138 ssh "sudo cat /usr/share/ca-certificates/8494012.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-574138 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.62s)
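Note: the hashed filenames checked last (51391683.0 and 3ec20f2e.0) follow the OpenSSL convention of linking a certificate under its subject hash so the trust store can locate it. The hash a given PEM is filed under can be computed with (path illustrative):

    # prints the hash; the cert is then expected at /etc/ssl/certs/<hash>.0
    openssl x509 -noout -subject_hash -in /path/to/849401.pem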
TestFunctional/parallel/NodeLabels (0.07s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-574138 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.07s)
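Note: the go-template ranges over the first node's label map and prints only the keys. If the whole map is easier to read, a jsonpath equivalent is:

    # dump the first node's labels as a map
    kubectl --context functional-574138 get nodes -o jsonpath='{.items[0].metadata.labels}'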
TestFunctional/parallel/NonActiveRuntimeDisabled (0.27s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-574138 ssh "sudo systemctl is-active crio"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-574138 ssh "sudo systemctl is-active crio": exit status 1 (268.724014ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.27s)
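Note: the non-zero exit is the expected outcome here. systemctl is-active exits with status 3 for an inactive unit, and crio must be inactive on a cluster started with --container-runtime=docker; the assertion is on the "inactive" stdout. Checked by hand:

    # expect "inactive" on stdout and a non-zero exit status
    out/minikube-linux-amd64 -p functional-574138 ssh "sudo systemctl is-active crio"; echo "exit: $?"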
TestFunctional/parallel/License (0.37s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License
=== CONT  TestFunctional/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.37s)
TestFunctional/parallel/ServiceCmd/DeployApp (8.2s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-574138 create deployment hello-node --image kicbase/echo-server
functional_test.go:1455: (dbg) Run:  kubectl --context functional-574138 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:352: "hello-node-75c85bcc94-jj7hj" [dc9affb9-6414-4ec1-84b8-6c15ad649425] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:352: "hello-node-75c85bcc94-jj7hj" [dc9affb9-6414-4ec1-84b8-6c15ad649425] Running
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 8.00607012s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (8.20s)
TestFunctional/parallel/ProfileCmd/profile_not_create (0.51s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.51s)
TestFunctional/parallel/MountCmd/any-port (7.71s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-574138 /tmp/TestFunctionalparallelMountCmdany-port2140380534/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1760362955605137402" to /tmp/TestFunctionalparallelMountCmdany-port2140380534/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1760362955605137402" to /tmp/TestFunctionalparallelMountCmdany-port2140380534/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1760362955605137402" to /tmp/TestFunctionalparallelMountCmdany-port2140380534/001/test-1760362955605137402
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-574138 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-574138 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (355.835883ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I1013 13:42:35.961341  849401 retry.go:31] will retry after 288.67827ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-574138 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-574138 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Oct 13 13:42 created-by-test
-rw-r--r-- 1 docker docker 24 Oct 13 13:42 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Oct 13 13:42 test-1760362955605137402
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-574138 ssh cat /mount-9p/test-1760362955605137402
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-574138 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:352: "busybox-mount" [0da64554-7e93-4704-933a-385b51f16173] Pending
helpers_test.go:352: "busybox-mount" [0da64554-7e93-4704-933a-385b51f16173] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:352: "busybox-mount" [0da64554-7e93-4704-933a-385b51f16173] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "busybox-mount" [0da64554-7e93-4704-933a-385b51f16173] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 5.003854514s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-574138 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-574138 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-574138 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-574138 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-574138 /tmp/TestFunctionalparallelMountCmdany-port2140380534/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (7.71s)
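Note: the first findmnt probe is allowed to fail because minikube mount brings its 9p server up asynchronously; the retry ~300ms later finds the mount. A stand-alone sketch of the same flow (host path illustrative):

    # export a host directory into the node over 9p, then verify from inside
    out/minikube-linux-amd64 mount -p functional-574138 /tmp/hostdir:/mount-9p &
    sleep 2
    out/minikube-linux-amd64 -p functional-574138 ssh "findmnt -T /mount-9p | grep 9p"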
TestFunctional/parallel/ProfileCmd/profile_list (0.44s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1330: Took "385.242437ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1344: Took "53.575785ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.44s)
TestFunctional/parallel/ProfileCmd/profile_json_output (0.44s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1381: Took "374.277436ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1394: Took "66.254407ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.44s)
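Note: the timing gap inside this subtest is the point of --light, which skips probing cluster state (~66ms vs ~374ms above). Extracting profile names from the JSON (assuming the usual valid/invalid top-level keys and jq available on the host):

    out/minikube-linux-amd64 profile list -o json --light | jq -r '.valid[].Name'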
TestFunctional/parallel/MountCmd/specific-port (2.13s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-574138 /tmp/TestFunctionalparallelMountCmdspecific-port3188010680/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-574138 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-574138 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (327.355645ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I1013 13:42:43.644938  849401 retry.go:31] will retry after 606.562915ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-574138 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-574138 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-574138 /tmp/TestFunctionalparallelMountCmdspecific-port3188010680/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-574138 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-574138 ssh "sudo umount -f /mount-9p": exit status 1 (320.949782ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-574138 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-574138 /tmp/TestFunctionalparallelMountCmdspecific-port3188010680/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.13s)
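Note: the failed "umount -f" (status 32, "not mounted") is tolerated cleanup noise; stopping the mount daemon had already removed the mount before the forced unmount ran. The subtest's actual subject is --port, which pins the 9p server to a fixed port instead of an ephemeral one:

    # serve the mount on a fixed port (e.g. when a firewall must allow it)
    out/minikube-linux-amd64 mount -p functional-574138 /tmp/hostdir:/mount-9p --port 46464 &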
TestFunctional/parallel/ServiceCmd/List (0.57s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-amd64 -p functional-574138 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.57s)
TestFunctional/parallel/ServiceCmd/JSONOutput (0.64s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-amd64 -p functional-574138 service list -o json
I1013 13:42:44.148776  849401 detect.go:223] nested VM detected
functional_test.go:1504: Took "644.149172ms" to run "out/minikube-linux-amd64 -p functional-574138 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.64s)
TestFunctional/parallel/ServiceCmd/HTTPS (0.42s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-amd64 -p functional-574138 service --namespace=default --https --url hello-node
functional_test.go:1532: found endpoint: https://192.168.49.2:31934
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.42s)
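Note: the endpoint is node IP plus NodePort: 192.168.49.2 is the docker-driver node's address and 31934 the port allocated when hello-node was exposed. The same URL can be assembled without minikube (sketch):

    NODE_IP=$(kubectl --context functional-574138 get nodes -o jsonpath='{.items[0].status.addresses[?(@.type=="InternalIP")].address}')
    NODE_PORT=$(kubectl --context functional-574138 get svc hello-node -o jsonpath='{.spec.ports[0].nodePort}')
    echo "https://${NODE_IP}:${NODE_PORT}"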
TestFunctional/parallel/ServiceCmd/Format (0.43s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-amd64 -p functional-574138 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.43s)
TestFunctional/parallel/MountCmd/VerifyCleanup (2.24s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-574138 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4145296625/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-574138 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4145296625/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-574138 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4145296625/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-574138 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-574138 ssh "findmnt -T" /mount1: exit status 1 (421.153045ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I1013 13:42:45.869515  849401 retry.go:31] will retry after 740.258215ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-574138 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-574138 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-574138 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-574138 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-574138 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4145296625/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-574138 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4145296625/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-574138 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4145296625/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (2.24s)
TestFunctional/parallel/ServiceCmd/URL (0.49s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-amd64 -p functional-574138 service hello-node --url
functional_test.go:1575: found endpoint for hello-node: http://192.168.49.2:31934
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.49s)
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.56s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-574138 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-574138 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-574138 tunnel --alsologtostderr] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-574138 tunnel --alsologtostderr] ...
helpers_test.go:525: unable to kill pid 899322: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.56s)
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-amd64 -p functional-574138 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)
TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (14.24s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-574138 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:352: "nginx-svc" [4c462de0-a555-4adb-8ce4-be4abb904278] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx-svc" [4c462de0-a555-4adb-8ce4-be4abb904278] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 14.004291423s
I1013 13:43:01.562664  849401 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (14.24s)
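Note: this setup step only waits for the nginx-svc pod; the tunnel started above is what later gives the LoadBalancer service a reachable external IP on the host. Observable by hand with:

    # with "minikube tunnel" running, EXTERNAL-IP should leave <pending>
    kubectl --context functional-574138 get svc nginx-svc -w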
TestFunctional/parallel/Version/short (0.07s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-amd64 -p functional-574138 version --short
--- PASS: TestFunctional/parallel/Version/short (0.07s)
TestFunctional/parallel/Version/components (0.6s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-amd64 -p functional-574138 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.60s)
TestFunctional/parallel/ImageCommands/ImageListShort (0.27s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-574138 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-574138 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.34.1
registry.k8s.io/kube-proxy:v1.34.1
registry.k8s.io/kube-controller-manager:v1.34.1
registry.k8s.io/kube-apiserver:v1.34.1
registry.k8s.io/etcd:3.6.4-0
registry.k8s.io/coredns/coredns:v1.12.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/minikube-local-cache-test:functional-574138
docker.io/kubernetesui/metrics-scraper:<none>
docker.io/kubernetesui/dashboard:<none>
docker.io/kicbase/echo-server:latest
docker.io/kicbase/echo-server:functional-574138
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-574138 image ls --format short --alsologtostderr:
I1013 13:43:02.479594  903190 out.go:360] Setting OutFile to fd 1 ...
I1013 13:43:02.480050  903190 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1013 13:43:02.480060  903190 out.go:374] Setting ErrFile to fd 2...
I1013 13:43:02.480065  903190 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1013 13:43:02.480555  903190 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21724-845765/.minikube/bin
I1013 13:43:02.481971  903190 config.go:182] Loaded profile config "functional-574138": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.1
I1013 13:43:02.482234  903190 config.go:182] Loaded profile config "functional-574138": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.1
I1013 13:43:02.483156  903190 cli_runner.go:164] Run: docker container inspect functional-574138 --format={{.State.Status}}
I1013 13:43:02.507274  903190 ssh_runner.go:195] Run: systemctl --version
I1013 13:43:02.507340  903190 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-574138
I1013 13:43:02.532035  903190 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33153 SSHKeyPath:/home/jenkins/minikube-integration/21724-845765/.minikube/machines/functional-574138/id_rsa Username:docker}
I1013 13:43:02.647608  903190 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.27s)
TestFunctional/parallel/ImageCommands/ImageListTable (0.27s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-574138 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-574138 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────────┬───────────────────┬───────────────┬────────┐
│                    IMAGE                    │        TAG        │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────────┼───────────────────┼───────────────┼────────┤
│ docker.io/library/nginx                     │ latest            │ 07ccdb7838758 │ 160MB  │
│ registry.k8s.io/etcd                        │ 3.6.4-0           │ 5f1f5298c888d │ 195MB  │
│ docker.io/kubernetesui/dashboard            │ <none>            │ 07655ddf2eebe │ 246MB  │
│ docker.io/kubernetesui/metrics-scraper      │ <none>            │ 115053965e86b │ 43.8MB │
│ registry.k8s.io/kube-apiserver              │ v1.34.1           │ c3994bc696102 │ 88MB   │
│ registry.k8s.io/kube-proxy                  │ v1.34.1           │ fc25172553d79 │ 71.9MB │
│ registry.k8s.io/coredns/coredns             │ v1.12.1           │ 52546a367cc9e │ 75MB   │
│ registry.k8s.io/pause                       │ 3.3               │ 0184c1613d929 │ 683kB  │
│ registry.k8s.io/pause                       │ 3.1               │ da86e6ba6ca19 │ 742kB  │
│ docker.io/library/nginx                     │ alpine            │ 5e7abcdd20216 │ 52.8MB │
│ registry.k8s.io/kube-scheduler              │ v1.34.1           │ 7dd6aaa1717ab │ 52.8MB │
│ docker.io/kicbase/echo-server               │ functional-574138 │ 9056ab77afb8e │ 4.94MB │
│ docker.io/kicbase/echo-server               │ latest            │ 9056ab77afb8e │ 4.94MB │
│ gcr.io/k8s-minikube/storage-provisioner     │ v5                │ 6e38f40d628db │ 31.5MB │
│ docker.io/library/minikube-local-cache-test │ functional-574138 │ d7f2284315c41 │ 30B    │
│ registry.k8s.io/kube-controller-manager     │ v1.34.1           │ c80c8dbafe7dd │ 74.9MB │
│ registry.k8s.io/pause                       │ 3.10.1            │ cd073f4c5f6a8 │ 736kB  │
│ gcr.io/k8s-minikube/busybox                 │ 1.28.4-glibc      │ 56cc512116c8f │ 4.4MB  │
│ registry.k8s.io/pause                       │ latest            │ 350b164e7ae1d │ 240kB  │
└─────────────────────────────────────────────┴───────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-574138 image ls --format table --alsologtostderr:
I1013 13:43:03.353867  903582 out.go:360] Setting OutFile to fd 1 ...
I1013 13:43:03.354188  903582 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1013 13:43:03.354205  903582 out.go:374] Setting ErrFile to fd 2...
I1013 13:43:03.354212  903582 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1013 13:43:03.354429  903582 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21724-845765/.minikube/bin
I1013 13:43:03.355052  903582 config.go:182] Loaded profile config "functional-574138": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.1
I1013 13:43:03.355186  903582 config.go:182] Loaded profile config "functional-574138": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.1
I1013 13:43:03.355767  903582 cli_runner.go:164] Run: docker container inspect functional-574138 --format={{.State.Status}}
I1013 13:43:03.376711  903582 ssh_runner.go:195] Run: systemctl --version
I1013 13:43:03.376762  903582 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-574138
I1013 13:43:03.398860  903582 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33153 SSHKeyPath:/home/jenkins/minikube-integration/21724-845765/.minikube/machines/functional-574138/id_rsa Username:docker}
I1013 13:43:03.513210  903582 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.27s)
TestFunctional/parallel/ImageCommands/ImageListJson (0.28s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-574138 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-574138 image ls --format json --alsologtostderr:
[{"id":"7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.34.1"],"size":"52800000"},{"id":"fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7","repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.34.1"],"size":"71900000"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"683000"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"742000"},{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":[],"repoTags":["docker.io/kubernetesui/metrics-scraper:\u003cnone\u003e"],"size":"43800000"},{"id":"d7f2284315c41e78b6f0111c53d4d4a186266f3a73874ae42350c0a1d71f003b","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-574138"],"size":"30"},{"id":"c3994bc6961024917ec0aeee02e628
28108c21a52d87648e30f3080d9cbadc97","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.34.1"],"size":"88000000"},{"id":"5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.6.4-0"],"size":"195000000"},{"id":"cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"736000"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4400000"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"240000"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31500000"},{"id":"07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":[],"repoTags":["docker.i
o/kubernetesui/dashboard:\u003cnone\u003e"],"size":"246000000"},{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":[],"repoTags":["docker.io/kicbase/echo-server:functional-574138","docker.io/kicbase/echo-server:latest"],"size":"4940000"},{"id":"5e7abcdd20216bbeedf1369529564ffd60f830ed3540c477938ca580b645dff5","repoDigests":[],"repoTags":["docker.io/library/nginx:alpine"],"size":"52800000"},{"id":"07ccdb7838758e758a4d52a9761636c385125a327355c0c94a6acff9babff938","repoDigests":[],"repoTags":["docker.io/library/nginx:latest"],"size":"160000000"},{"id":"c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.34.1"],"size":"74900000"},{"id":"52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.12.1"],"size":"75000000"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-574138 image ls --format json --alsologtostderr:
I1013 13:43:03.085181  903434 out.go:360] Setting OutFile to fd 1 ...
I1013 13:43:03.085595  903434 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1013 13:43:03.085608  903434 out.go:374] Setting ErrFile to fd 2...
I1013 13:43:03.085614  903434 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1013 13:43:03.085931  903434 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21724-845765/.minikube/bin
I1013 13:43:03.086846  903434 config.go:182] Loaded profile config "functional-574138": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.1
I1013 13:43:03.087114  903434 config.go:182] Loaded profile config "functional-574138": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.1
I1013 13:43:03.088405  903434 cli_runner.go:164] Run: docker container inspect functional-574138 --format={{.State.Status}}
I1013 13:43:03.113027  903434 ssh_runner.go:195] Run: systemctl --version
I1013 13:43:03.113204  903434 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-574138
I1013 13:43:03.134965  903434 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33153 SSHKeyPath:/home/jenkins/minikube-integration/21724-845765/.minikube/machines/functional-574138/id_rsa Username:docker}
I1013 13:43:03.250815  903434 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.28s)
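Note: of the four list formats exercised here, JSON is the one intended for post-processing. For example, printing each image's first tag with its size (assuming jq is available on the host):

    out/minikube-linux-amd64 -p functional-574138 image ls --format json \
      | jq -r '.[] | "\(.repoTags[0]) \(.size)"'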
TestFunctional/parallel/ImageCommands/ImageListYaml (0.26s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-574138 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-574138 image ls --format yaml --alsologtostderr:
- id: 5e7abcdd20216bbeedf1369529564ffd60f830ed3540c477938ca580b645dff5
repoDigests: []
repoTags:
- docker.io/library/nginx:alpine
size: "52800000"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "683000"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4400000"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "742000"
- id: 115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests: []
repoTags:
- docker.io/kubernetesui/metrics-scraper:<none>
size: "43800000"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31500000"
- id: 7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.34.1
size: "52800000"
- id: fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.34.1
size: "71900000"
- id: 5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.6.4-0
size: "195000000"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "240000"
- id: c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.34.1
size: "74900000"
- id: cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.10.1
size: "736000"
- id: 52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.12.1
size: "75000000"
- id: d7f2284315c41e78b6f0111c53d4d4a186266f3a73874ae42350c0a1d71f003b
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-574138
size: "30"
- id: 07ccdb7838758e758a4d52a9761636c385125a327355c0c94a6acff9babff938
repoDigests: []
repoTags:
- docker.io/library/nginx:latest
size: "160000000"
- id: c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.34.1
size: "88000000"
- id: 07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558
repoDigests: []
repoTags:
- docker.io/kubernetesui/dashboard:<none>
size: "246000000"
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests: []
repoTags:
- docker.io/kicbase/echo-server:functional-574138
- docker.io/kicbase/echo-server:latest
size: "4940000"

functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-574138 image ls --format yaml --alsologtostderr:
I1013 13:43:02.748822  903344 out.go:360] Setting OutFile to fd 1 ...
I1013 13:43:02.748952  903344 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1013 13:43:02.748965  903344 out.go:374] Setting ErrFile to fd 2...
I1013 13:43:02.748970  903344 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1013 13:43:02.749341  903344 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21724-845765/.minikube/bin
I1013 13:43:02.750348  903344 config.go:182] Loaded profile config "functional-574138": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.1
I1013 13:43:02.750504  903344 config.go:182] Loaded profile config "functional-574138": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.1
I1013 13:43:02.751016  903344 cli_runner.go:164] Run: docker container inspect functional-574138 --format={{.State.Status}}
I1013 13:43:02.773547  903344 ssh_runner.go:195] Run: systemctl --version
I1013 13:43:02.773608  903344 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-574138
I1013 13:43:02.797061  903344 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33153 SSHKeyPath:/home/jenkins/minikube-integration/21724-845765/.minikube/machines/functional-574138/id_rsa Username:docker}
I1013 13:43:02.910106  903344 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.26s)
TestFunctional/parallel/ImageCommands/ImageBuild (4.6s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-amd64 -p functional-574138 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-574138 ssh pgrep buildkitd: exit status 1 (328.587257ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-amd64 -p functional-574138 image build -t localhost/my-image:functional-574138 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-amd64 -p functional-574138 image build -t localhost/my-image:functional-574138 testdata/build --alsologtostderr: (4.022697587s)
functional_test.go:338: (dbg) Stderr: out/minikube-linux-amd64 -p functional-574138 image build -t localhost/my-image:functional-574138 testdata/build --alsologtostderr:
I1013 13:43:03.340048  903576 out.go:360] Setting OutFile to fd 1 ...
I1013 13:43:03.340400  903576 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1013 13:43:03.340409  903576 out.go:374] Setting ErrFile to fd 2...
I1013 13:43:03.340415  903576 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1013 13:43:03.340752  903576 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21724-845765/.minikube/bin
I1013 13:43:03.341655  903576 config.go:182] Loaded profile config "functional-574138": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.1
I1013 13:43:03.342592  903576 config.go:182] Loaded profile config "functional-574138": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.1
I1013 13:43:03.343188  903576 cli_runner.go:164] Run: docker container inspect functional-574138 --format={{.State.Status}}
I1013 13:43:03.369462  903576 ssh_runner.go:195] Run: systemctl --version
I1013 13:43:03.369545  903576 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-574138
I1013 13:43:03.393811  903576 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33153 SSHKeyPath:/home/jenkins/minikube-integration/21724-845765/.minikube/machines/functional-574138/id_rsa Username:docker}
I1013 13:43:03.508530  903576 build_images.go:161] Building image from path: /tmp/build.3579796583.tar
I1013 13:43:03.508604  903576 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1013 13:43:03.519432  903576 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.3579796583.tar
I1013 13:43:03.524318  903576 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.3579796583.tar: stat -c "%s %y" /var/lib/minikube/build/build.3579796583.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.3579796583.tar': No such file or directory
I1013 13:43:03.524360  903576 ssh_runner.go:362] scp /tmp/build.3579796583.tar --> /var/lib/minikube/build/build.3579796583.tar (3072 bytes)
I1013 13:43:03.549166  903576 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.3579796583
I1013 13:43:03.561315  903576 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.3579796583 -xf /var/lib/minikube/build/build.3579796583.tar
I1013 13:43:03.572461  903576 docker.go:361] Building image: /var/lib/minikube/build/build.3579796583
I1013 13:43:03.572543  903576 ssh_runner.go:195] Run: docker build -t localhost/my-image:functional-574138 /var/lib/minikube/build/build.3579796583
#0 building with "default" instance using docker driver

#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.0s

#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 1.6s

#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s

#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.1s

#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.1s done
#5 sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 770B / 770B done
#5 sha256:62ffc2ed7554e4c6d360bce40bbcf196573dd27c4ce080641a2c59867e732dee 527B / 527B done
#5 sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a 1.46kB / 1.46kB done
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0B / 772.79kB 0.1s
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 772.79kB / 772.79kB 0.5s done
#5 extracting sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0.0s done
#5 DONE 0.6s

#6 [2/3] RUN true
#6 DONE 1.0s

#7 [3/3] ADD content.txt /
#7 DONE 0.1s

#8 exporting to image
#8 exporting layers 0.0s done
#8 writing image sha256:ca84bfb49b162e4baf753c2d63e35105482edb464195d89e7a184aff8be0c8be done
#8 naming to localhost/my-image:functional-574138 done
#8 DONE 0.0s
I1013 13:43:07.272823  903576 ssh_runner.go:235] Completed: docker build -t localhost/my-image:functional-574138 /var/lib/minikube/build/build.3579796583: (3.700243519s)
I1013 13:43:07.272902  903576 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.3579796583
I1013 13:43:07.283334  903576 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.3579796583.tar
I1013 13:43:07.293278  903576 build_images.go:217] Built localhost/my-image:functional-574138 from /tmp/build.3579796583.tar
I1013 13:43:07.293315  903576 build_images.go:133] succeeded building to: functional-574138
I1013 13:43:07.293322  903576 build_images.go:134] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-574138 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (4.60s)
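Note: the flow visible above: pgrep finds no buildkitd in the node, so the client tars the build context, copies it to /var/lib/minikube/build, and runs docker build against the node's daemon, which is why the buildkit output arrives via ssh_runner. The equivalent round trip from the host:

    out/minikube-linux-amd64 -p functional-574138 image build -t localhost/my-image:functional-574138 testdata/build
    out/minikube-linux-amd64 -p functional-574138 image ls | grep my-image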
TestFunctional/parallel/ImageCommands/Setup (1.98s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
2025/10/13 13:42:48 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:357: (dbg) Done: docker pull kicbase/echo-server:1.0: (1.962097997s)
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-574138
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.98s)
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.06s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-amd64 -p functional-574138 image load --daemon kicbase/echo-server:functional-574138 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-574138 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.06s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.92s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-574138 image load --daemon kicbase/echo-server:functional-574138 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-574138 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.92s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.74s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
E1013 13:42:52.391280  849401 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-845765/.minikube/profiles/addons-789670/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-574138
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-574138 image load --daemon kicbase/echo-server:functional-574138 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-574138 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.74s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.32s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-amd64 -p functional-574138 image save kicbase/echo-server:functional-574138 /home/jenkins/workspace/Docker_Linux_integration/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.32s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.49s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-amd64 -p functional-574138 image rm kicbase/echo-server:functional-574138 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-574138 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.49s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.58s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-574138 image load /home/jenkins/workspace/Docker_Linux_integration/echo-server-save.tar --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-574138 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.58s)
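ImageSaveToFile, ImageRemove, and ImageLoadFromFile together form a save/remove/reload round trip through a tarball; a minimal sketch of the same flow (any writable tar path works in place of this run's):

    out/minikube-linux-amd64 -p functional-574138 image save kicbase/echo-server:functional-574138 /tmp/echo-server-save.tar
    out/minikube-linux-amd64 -p functional-574138 image rm kicbase/echo-server:functional-574138
    out/minikube-linux-amd64 -p functional-574138 image load /tmp/echo-server-save.tar
    out/minikube-linux-amd64 -p functional-574138 image ls    # the tag should be listed again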

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.36s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-574138
functional_test.go:439: (dbg) Run:  out/minikube-linux-amd64 -p functional-574138 image save --daemon kicbase/echo-server:functional-574138 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect kicbase/echo-server:functional-574138
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.36s)

TestFunctional/parallel/DockerEnv/bash (0.97s)

=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:514: (dbg) Run:  /bin/bash -c "eval $(out/minikube-linux-amd64 -p functional-574138 docker-env) && out/minikube-linux-amd64 status -p functional-574138"
functional_test.go:537: (dbg) Run:  /bin/bash -c "eval $(out/minikube-linux-amd64 -p functional-574138 docker-env) && docker images"
--- PASS: TestFunctional/parallel/DockerEnv/bash (0.97s)
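The DockerEnv test drives the standard docker-env pattern: export the variables the command prints so the host docker client talks to the cluster's daemon for the rest of the shell. A minimal sketch:

    # point this shell's docker client at the in-cluster daemon
    eval $(out/minikube-linux-amd64 -p functional-574138 docker-env)
    docker images    # now lists the cluster runtime's images
    # revert to the host daemon when done
    eval $(out/minikube-linux-amd64 -p functional-574138 docker-env --unset)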

TestFunctional/parallel/UpdateContextCmd/no_changes (0.18s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-574138 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.18s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.16s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-574138 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.16s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.16s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-574138 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.16s)
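All three update-context cases run the same command, which rewrites the profile's kubeconfig entry to match the cluster's current endpoint; a minimal sketch:

    out/minikube-linux-amd64 -p functional-574138 update-context
    kubectl config current-context    # should print functional-574138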

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.08s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-574138 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.08s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.109.97.186 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-amd64 -p functional-574138 tunnel --alsologtostderr] ...
functional_test_tunnel_test.go:437: failed to stop process: signal: terminated
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

TestFunctional/delete_echo-server_images (0.04s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-574138
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

TestFunctional/delete_my-image_image (0.02s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-574138
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-574138
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestMultiControlPlane/serial/StartCluster (159.85s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 -p ha-132883 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=docker
E1013 13:44:14.315536  849401 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-845765/.minikube/profiles/addons-789670/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 -p ha-132883 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=docker: (2m39.102711755s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-132883 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/StartCluster (159.85s)
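The cluster under test here is created with --ha, which provisions multiple control-plane nodes behind one virtual API endpoint; a minimal sketch of the same start:

    out/minikube-linux-amd64 -p ha-132883 start --ha --memory 3072 --wait true --driver=docker --container-runtime=docker
    out/minikube-linux-amd64 -p ha-132883 status    # should show three control-plane nodes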

TestMultiControlPlane/serial/DeployApp (5.59s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 -p ha-132883 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 -p ha-132883 kubectl -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 -p ha-132883 kubectl -- rollout status deployment/busybox: (3.501890171s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 -p ha-132883 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 -p ha-132883 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-132883 kubectl -- exec busybox-7b57f96db7-9hfrg -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-132883 kubectl -- exec busybox-7b57f96db7-gtbts -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-132883 kubectl -- exec busybox-7b57f96db7-jdpbt -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-132883 kubectl -- exec busybox-7b57f96db7-9hfrg -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-132883 kubectl -- exec busybox-7b57f96db7-gtbts -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-132883 kubectl -- exec busybox-7b57f96db7-jdpbt -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-132883 kubectl -- exec busybox-7b57f96db7-9hfrg -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-132883 kubectl -- exec busybox-7b57f96db7-gtbts -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-132883 kubectl -- exec busybox-7b57f96db7-jdpbt -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (5.59s)

TestMultiControlPlane/serial/PingHostFromPods (1.16s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 -p ha-132883 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-132883 kubectl -- exec busybox-7b57f96db7-9hfrg -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-132883 kubectl -- exec busybox-7b57f96db7-9hfrg -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-132883 kubectl -- exec busybox-7b57f96db7-gtbts -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-132883 kubectl -- exec busybox-7b57f96db7-gtbts -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-132883 kubectl -- exec busybox-7b57f96db7-jdpbt -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-132883 kubectl -- exec busybox-7b57f96db7-jdpbt -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.16s)
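Each pod first resolves host.minikube.internal (a name minikube injects into cluster DNS for the host gateway) and then pings the address it resolves to; the per-pod check, with a pod name taken from this run:

    kubectl --context ha-132883 exec busybox-7b57f96db7-9hfrg -- sh -c "nslookup host.minikube.internal"
    kubectl --context ha-132883 exec busybox-7b57f96db7-9hfrg -- sh -c "ping -c 1 192.168.49.1"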

TestMultiControlPlane/serial/AddWorkerNode (35.65s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 -p ha-132883 node add --alsologtostderr -v 5
E1013 13:46:30.453466  849401 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-845765/.minikube/profiles/addons-789670/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 -p ha-132883 node add --alsologtostderr -v 5: (34.747226596s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-132883 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (35.65s)

TestMultiControlPlane/serial/NodeLabels (0.07s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-132883 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.07s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (0.94s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.94s)

TestMultiControlPlane/serial/CopyFile (17.58s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-amd64 -p ha-132883 status --output json --alsologtostderr -v 5
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-132883 cp testdata/cp-test.txt ha-132883:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-132883 ssh -n ha-132883 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-132883 cp ha-132883:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile700585804/001/cp-test_ha-132883.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-132883 ssh -n ha-132883 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-132883 cp ha-132883:/home/docker/cp-test.txt ha-132883-m02:/home/docker/cp-test_ha-132883_ha-132883-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-132883 ssh -n ha-132883 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-132883 ssh -n ha-132883-m02 "sudo cat /home/docker/cp-test_ha-132883_ha-132883-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-132883 cp ha-132883:/home/docker/cp-test.txt ha-132883-m03:/home/docker/cp-test_ha-132883_ha-132883-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-132883 ssh -n ha-132883 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-132883 ssh -n ha-132883-m03 "sudo cat /home/docker/cp-test_ha-132883_ha-132883-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-132883 cp ha-132883:/home/docker/cp-test.txt ha-132883-m04:/home/docker/cp-test_ha-132883_ha-132883-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-132883 ssh -n ha-132883 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-132883 ssh -n ha-132883-m04 "sudo cat /home/docker/cp-test_ha-132883_ha-132883-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-132883 cp testdata/cp-test.txt ha-132883-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-132883 ssh -n ha-132883-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-132883 cp ha-132883-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile700585804/001/cp-test_ha-132883-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-132883 ssh -n ha-132883-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-132883 cp ha-132883-m02:/home/docker/cp-test.txt ha-132883:/home/docker/cp-test_ha-132883-m02_ha-132883.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-132883 ssh -n ha-132883-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-132883 ssh -n ha-132883 "sudo cat /home/docker/cp-test_ha-132883-m02_ha-132883.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-132883 cp ha-132883-m02:/home/docker/cp-test.txt ha-132883-m03:/home/docker/cp-test_ha-132883-m02_ha-132883-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-132883 ssh -n ha-132883-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-132883 ssh -n ha-132883-m03 "sudo cat /home/docker/cp-test_ha-132883-m02_ha-132883-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-132883 cp ha-132883-m02:/home/docker/cp-test.txt ha-132883-m04:/home/docker/cp-test_ha-132883-m02_ha-132883-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-132883 ssh -n ha-132883-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-132883 ssh -n ha-132883-m04 "sudo cat /home/docker/cp-test_ha-132883-m02_ha-132883-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-132883 cp testdata/cp-test.txt ha-132883-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-132883 ssh -n ha-132883-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-132883 cp ha-132883-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile700585804/001/cp-test_ha-132883-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-132883 ssh -n ha-132883-m03 "sudo cat /home/docker/cp-test.txt"
E1013 13:46:58.157043  849401 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-845765/.minikube/profiles/addons-789670/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-132883 cp ha-132883-m03:/home/docker/cp-test.txt ha-132883:/home/docker/cp-test_ha-132883-m03_ha-132883.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-132883 ssh -n ha-132883-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-132883 ssh -n ha-132883 "sudo cat /home/docker/cp-test_ha-132883-m03_ha-132883.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-132883 cp ha-132883-m03:/home/docker/cp-test.txt ha-132883-m02:/home/docker/cp-test_ha-132883-m03_ha-132883-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-132883 ssh -n ha-132883-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-132883 ssh -n ha-132883-m02 "sudo cat /home/docker/cp-test_ha-132883-m03_ha-132883-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-132883 cp ha-132883-m03:/home/docker/cp-test.txt ha-132883-m04:/home/docker/cp-test_ha-132883-m03_ha-132883-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-132883 ssh -n ha-132883-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-132883 ssh -n ha-132883-m04 "sudo cat /home/docker/cp-test_ha-132883-m03_ha-132883-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-132883 cp testdata/cp-test.txt ha-132883-m04:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-132883 ssh -n ha-132883-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-132883 cp ha-132883-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile700585804/001/cp-test_ha-132883-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-132883 ssh -n ha-132883-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-132883 cp ha-132883-m04:/home/docker/cp-test.txt ha-132883:/home/docker/cp-test_ha-132883-m04_ha-132883.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-132883 ssh -n ha-132883-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-132883 ssh -n ha-132883 "sudo cat /home/docker/cp-test_ha-132883-m04_ha-132883.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-132883 cp ha-132883-m04:/home/docker/cp-test.txt ha-132883-m02:/home/docker/cp-test_ha-132883-m04_ha-132883-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-132883 ssh -n ha-132883-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-132883 ssh -n ha-132883-m02 "sudo cat /home/docker/cp-test_ha-132883-m04_ha-132883-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-132883 cp ha-132883-m04:/home/docker/cp-test.txt ha-132883-m03:/home/docker/cp-test_ha-132883-m04_ha-132883-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-132883 ssh -n ha-132883-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-132883 ssh -n ha-132883-m03 "sudo cat /home/docker/cp-test_ha-132883-m04_ha-132883-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (17.58s)
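CopyFile walks every host-to-node and node-to-node direction of `minikube cp`, verifying each copy over `minikube ssh`; the repeated pattern, per source/destination pair:

    out/minikube-linux-amd64 -p ha-132883 cp testdata/cp-test.txt ha-132883-m02:/home/docker/cp-test.txt
    out/minikube-linux-amd64 -p ha-132883 ssh -n ha-132883-m02 "sudo cat /home/docker/cp-test.txt"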

TestMultiControlPlane/serial/StopSecondaryNode (11.51s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p ha-132883 node stop m02 --alsologtostderr -v 5
ha_test.go:365: (dbg) Done: out/minikube-linux-amd64 -p ha-132883 node stop m02 --alsologtostderr -v 5: (10.790177548s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-amd64 -p ha-132883 status --alsologtostderr -v 5
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-132883 status --alsologtostderr -v 5: exit status 7 (720.857298ms)

-- stdout --
	ha-132883
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-132883-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-132883-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-132883-m04
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I1013 13:47:16.274392  931105 out.go:360] Setting OutFile to fd 1 ...
	I1013 13:47:16.274638  931105 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1013 13:47:16.274645  931105 out.go:374] Setting ErrFile to fd 2...
	I1013 13:47:16.274649  931105 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1013 13:47:16.274871  931105 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21724-845765/.minikube/bin
	I1013 13:47:16.275043  931105 out.go:368] Setting JSON to false
	I1013 13:47:16.275073  931105 mustload.go:65] Loading cluster: ha-132883
	I1013 13:47:16.275135  931105 notify.go:220] Checking for updates...
	I1013 13:47:16.275503  931105 config.go:182] Loaded profile config "ha-132883": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.1
	I1013 13:47:16.275528  931105 status.go:174] checking status of ha-132883 ...
	I1013 13:47:16.275961  931105 cli_runner.go:164] Run: docker container inspect ha-132883 --format={{.State.Status}}
	I1013 13:47:16.295166  931105 status.go:371] ha-132883 host status = "Running" (err=<nil>)
	I1013 13:47:16.295190  931105 host.go:66] Checking if "ha-132883" exists ...
	I1013 13:47:16.295427  931105 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-132883
	I1013 13:47:16.312687  931105 host.go:66] Checking if "ha-132883" exists ...
	I1013 13:47:16.312966  931105 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1013 13:47:16.313023  931105 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-132883
	I1013 13:47:16.330518  931105 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33158 SSHKeyPath:/home/jenkins/minikube-integration/21724-845765/.minikube/machines/ha-132883/id_rsa Username:docker}
	I1013 13:47:16.432877  931105 ssh_runner.go:195] Run: systemctl --version
	I1013 13:47:16.439382  931105 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1013 13:47:16.453055  931105 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1013 13:47:16.510678  931105 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:68 OomKillDisable:false NGoroutines:75 SystemTime:2025-10-13 13:47:16.500714689 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1013 13:47:16.511286  931105 kubeconfig.go:125] found "ha-132883" server: "https://192.168.49.254:8443"
	I1013 13:47:16.511320  931105 api_server.go:166] Checking apiserver status ...
	I1013 13:47:16.511379  931105 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1013 13:47:16.524735  931105 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2233/cgroup
	W1013 13:47:16.533104  931105 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/2233/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1013 13:47:16.533161  931105 ssh_runner.go:195] Run: ls
	I1013 13:47:16.537284  931105 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1013 13:47:16.541963  931105 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1013 13:47:16.541989  931105 status.go:463] ha-132883 apiserver status = Running (err=<nil>)
	I1013 13:47:16.542003  931105 status.go:176] ha-132883 status: &{Name:ha-132883 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1013 13:47:16.542024  931105 status.go:174] checking status of ha-132883-m02 ...
	I1013 13:47:16.542336  931105 cli_runner.go:164] Run: docker container inspect ha-132883-m02 --format={{.State.Status}}
	I1013 13:47:16.561576  931105 status.go:371] ha-132883-m02 host status = "Stopped" (err=<nil>)
	I1013 13:47:16.561599  931105 status.go:384] host is not running, skipping remaining checks
	I1013 13:47:16.561606  931105 status.go:176] ha-132883-m02 status: &{Name:ha-132883-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1013 13:47:16.561635  931105 status.go:174] checking status of ha-132883-m03 ...
	I1013 13:47:16.561892  931105 cli_runner.go:164] Run: docker container inspect ha-132883-m03 --format={{.State.Status}}
	I1013 13:47:16.578554  931105 status.go:371] ha-132883-m03 host status = "Running" (err=<nil>)
	I1013 13:47:16.578581  931105 host.go:66] Checking if "ha-132883-m03" exists ...
	I1013 13:47:16.578832  931105 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-132883-m03
	I1013 13:47:16.596434  931105 host.go:66] Checking if "ha-132883-m03" exists ...
	I1013 13:47:16.596797  931105 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1013 13:47:16.596854  931105 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-132883-m03
	I1013 13:47:16.614401  931105 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33168 SSHKeyPath:/home/jenkins/minikube-integration/21724-845765/.minikube/machines/ha-132883-m03/id_rsa Username:docker}
	I1013 13:47:16.715825  931105 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1013 13:47:16.729181  931105 kubeconfig.go:125] found "ha-132883" server: "https://192.168.49.254:8443"
	I1013 13:47:16.729215  931105 api_server.go:166] Checking apiserver status ...
	I1013 13:47:16.729259  931105 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1013 13:47:16.741696  931105 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2107/cgroup
	W1013 13:47:16.750287  931105 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/2107/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1013 13:47:16.750332  931105 ssh_runner.go:195] Run: ls
	I1013 13:47:16.754044  931105 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1013 13:47:16.758845  931105 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1013 13:47:16.758871  931105 status.go:463] ha-132883-m03 apiserver status = Running (err=<nil>)
	I1013 13:47:16.758881  931105 status.go:176] ha-132883-m03 status: &{Name:ha-132883-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1013 13:47:16.758900  931105 status.go:174] checking status of ha-132883-m04 ...
	I1013 13:47:16.759158  931105 cli_runner.go:164] Run: docker container inspect ha-132883-m04 --format={{.State.Status}}
	I1013 13:47:16.776565  931105 status.go:371] ha-132883-m04 host status = "Running" (err=<nil>)
	I1013 13:47:16.776590  931105 host.go:66] Checking if "ha-132883-m04" exists ...
	I1013 13:47:16.776847  931105 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-132883-m04
	I1013 13:47:16.795046  931105 host.go:66] Checking if "ha-132883-m04" exists ...
	I1013 13:47:16.795363  931105 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1013 13:47:16.795410  931105 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-132883-m04
	I1013 13:47:16.812210  931105 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33173 SSHKeyPath:/home/jenkins/minikube-integration/21724-845765/.minikube/machines/ha-132883-m04/id_rsa Username:docker}
	I1013 13:47:16.913585  931105 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1013 13:47:16.942836  931105 status.go:176] ha-132883-m04 status: &{Name:ha-132883-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (11.51s)
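Note that the `exit status 7` above is expected rather than a failure: per its help text, `minikube status` encodes host, cluster, and Kubernetes state as bits of the exit code (7 meaning all three flagged, here because m02 is stopped), so callers can branch on it; a sketch:

    out/minikube-linux-amd64 -p ha-132883 status
    [ $? -eq 7 ] && echo "at least one node is down"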

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.74s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.74s)

TestMultiControlPlane/serial/RestartSecondaryNode (38.31s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p ha-132883 node start m02 --alsologtostderr -v 5
E1013 13:47:35.421185  849401 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-845765/.minikube/profiles/functional-574138/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 13:47:35.427573  849401 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-845765/.minikube/profiles/functional-574138/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 13:47:35.438958  849401 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-845765/.minikube/profiles/functional-574138/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 13:47:35.460484  849401 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-845765/.minikube/profiles/functional-574138/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 13:47:35.501882  849401 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-845765/.minikube/profiles/functional-574138/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 13:47:35.583752  849401 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-845765/.minikube/profiles/functional-574138/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 13:47:35.745408  849401 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-845765/.minikube/profiles/functional-574138/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 13:47:36.067662  849401 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-845765/.minikube/profiles/functional-574138/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 13:47:36.709253  849401 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-845765/.minikube/profiles/functional-574138/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 13:47:37.990829  849401 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-845765/.minikube/profiles/functional-574138/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 13:47:40.552491  849401 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-845765/.minikube/profiles/functional-574138/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 13:47:45.674641  849401 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-845765/.minikube/profiles/functional-574138/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:422: (dbg) Done: out/minikube-linux-amd64 -p ha-132883 node start m02 --alsologtostderr -v 5: (37.261670228s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-132883 status --alsologtostderr -v 5
E1013 13:47:55.916679  849401 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-845765/.minikube/profiles/functional-574138/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (38.31s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.95s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.95s)

TestMultiControlPlane/serial/RestartClusterKeepsNodes (177.65s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-amd64 -p ha-132883 node list --alsologtostderr -v 5
ha_test.go:464: (dbg) Run:  out/minikube-linux-amd64 -p ha-132883 stop --alsologtostderr -v 5
E1013 13:48:16.398229  849401 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-845765/.minikube/profiles/functional-574138/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:464: (dbg) Done: out/minikube-linux-amd64 -p ha-132883 stop --alsologtostderr -v 5: (33.438981435s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-amd64 -p ha-132883 start --wait true --alsologtostderr -v 5
E1013 13:48:57.359629  849401 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-845765/.minikube/profiles/functional-574138/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 13:50:19.281285  849401 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-845765/.minikube/profiles/functional-574138/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:469: (dbg) Done: out/minikube-linux-amd64 -p ha-132883 start --wait true --alsologtostderr -v 5: (2m24.095930479s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-amd64 -p ha-132883 node list --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (177.65s)

TestMultiControlPlane/serial/DeleteSecondaryNode (9.56s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-amd64 -p ha-132883 node delete m03 --alsologtostderr -v 5
ha_test.go:489: (dbg) Done: out/minikube-linux-amd64 -p ha-132883 node delete m03 --alsologtostderr -v 5: (8.750342384s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-amd64 -p ha-132883 status --alsologtostderr -v 5
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (9.56s)

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.7s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.70s)

TestMultiControlPlane/serial/StopCluster (32.32s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p ha-132883 stop --alsologtostderr -v 5
E1013 13:51:30.453934  849401 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-845765/.minikube/profiles/addons-789670/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:533: (dbg) Done: out/minikube-linux-amd64 -p ha-132883 stop --alsologtostderr -v 5: (32.215540228s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-amd64 -p ha-132883 status --alsologtostderr -v 5
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-132883 status --alsologtostderr -v 5: exit status 7 (99.535134ms)

-- stdout --
	ha-132883
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-132883-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-132883-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I1013 13:51:37.104163  961727 out.go:360] Setting OutFile to fd 1 ...
	I1013 13:51:37.104278  961727 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1013 13:51:37.104290  961727 out.go:374] Setting ErrFile to fd 2...
	I1013 13:51:37.104298  961727 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1013 13:51:37.104529  961727 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21724-845765/.minikube/bin
	I1013 13:51:37.104700  961727 out.go:368] Setting JSON to false
	I1013 13:51:37.104731  961727 mustload.go:65] Loading cluster: ha-132883
	I1013 13:51:37.104861  961727 notify.go:220] Checking for updates...
	I1013 13:51:37.105239  961727 config.go:182] Loaded profile config "ha-132883": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.1
	I1013 13:51:37.105263  961727 status.go:174] checking status of ha-132883 ...
	I1013 13:51:37.105768  961727 cli_runner.go:164] Run: docker container inspect ha-132883 --format={{.State.Status}}
	I1013 13:51:37.123039  961727 status.go:371] ha-132883 host status = "Stopped" (err=<nil>)
	I1013 13:51:37.123057  961727 status.go:384] host is not running, skipping remaining checks
	I1013 13:51:37.123062  961727 status.go:176] ha-132883 status: &{Name:ha-132883 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1013 13:51:37.123112  961727 status.go:174] checking status of ha-132883-m02 ...
	I1013 13:51:37.123358  961727 cli_runner.go:164] Run: docker container inspect ha-132883-m02 --format={{.State.Status}}
	I1013 13:51:37.139989  961727 status.go:371] ha-132883-m02 host status = "Stopped" (err=<nil>)
	I1013 13:51:37.140025  961727 status.go:384] host is not running, skipping remaining checks
	I1013 13:51:37.140035  961727 status.go:176] ha-132883-m02 status: &{Name:ha-132883-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1013 13:51:37.140056  961727 status.go:174] checking status of ha-132883-m04 ...
	I1013 13:51:37.140331  961727 cli_runner.go:164] Run: docker container inspect ha-132883-m04 --format={{.State.Status}}
	I1013 13:51:37.156777  961727 status.go:371] ha-132883-m04 host status = "Stopped" (err=<nil>)
	I1013 13:51:37.156797  961727 status.go:384] host is not running, skipping remaining checks
	I1013 13:51:37.156803  961727 status.go:176] ha-132883-m04 status: &{Name:ha-132883-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (32.32s)

TestMultiControlPlane/serial/RestartCluster (106.1s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-amd64 -p ha-132883 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=docker
E1013 13:52:35.420988  849401 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-845765/.minikube/profiles/functional-574138/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 13:53:03.123286  849401 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-845765/.minikube/profiles/functional-574138/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:562: (dbg) Done: out/minikube-linux-amd64 -p ha-132883 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=docker: (1m45.261727863s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-amd64 -p ha-132883 status --alsologtostderr -v 5
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (106.10s)
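RestartCluster is the full stop/start cycle on the whole profile: every node is stopped, and a single start brings the entire HA topology back; a minimal sketch:

    out/minikube-linux-amd64 -p ha-132883 stop
    out/minikube-linux-amd64 -p ha-132883 start --wait true --driver=docker --container-runtime=docker
    kubectl get nodes    # every previous node should come back Ready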

TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.73s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.73s)

TestMultiControlPlane/serial/AddSecondaryNode (41.53s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-amd64 -p ha-132883 node add --control-plane --alsologtostderr -v 5
ha_test.go:607: (dbg) Done: out/minikube-linux-amd64 -p ha-132883 node add --control-plane --alsologtostderr -v 5: (40.617082932s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-amd64 -p ha-132883 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (41.53s)
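Growing the control plane after the fact uses the same `node add` as the worker case, with --control-plane selecting the role; a minimal sketch:

    out/minikube-linux-amd64 -p ha-132883 node add --control-plane
    out/minikube-linux-amd64 -p ha-132883 status    # the new node should join as a control plane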

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.93s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.93s)

TestImageBuild/serial/Setup (26.22s)

=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-linux-amd64 start -p image-364433 --driver=docker  --container-runtime=docker
image_test.go:69: (dbg) Done: out/minikube-linux-amd64 start -p image-364433 --driver=docker  --container-runtime=docker: (26.221713151s)
--- PASS: TestImageBuild/serial/Setup (26.22s)

TestImageBuild/serial/NormalBuild (1.08s)

=== RUN   TestImageBuild/serial/NormalBuild
image_test.go:78: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-364433
image_test.go:78: (dbg) Done: out/minikube-linux-amd64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-364433: (1.074910651s)
--- PASS: TestImageBuild/serial/NormalBuild (1.08s)

TestImageBuild/serial/BuildWithBuildArg (0.66s)

=== RUN   TestImageBuild/serial/BuildWithBuildArg
image_test.go:99: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-364433
--- PASS: TestImageBuild/serial/BuildWithBuildArg (0.66s)
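--build-opt forwards options to the underlying builder, so Dockerfile ARG values and cache behavior can be controlled through `minikube image build`; the invocation under test, reproducible as:

    out/minikube-linux-amd64 -p image-364433 image build -t aaa:latest \
        --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg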

TestImageBuild/serial/BuildWithDockerIgnore (0.46s)

=== RUN   TestImageBuild/serial/BuildWithDockerIgnore
image_test.go:133: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-364433
--- PASS: TestImageBuild/serial/BuildWithDockerIgnore (0.46s)

TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.48s)

=== RUN   TestImageBuild/serial/BuildWithSpecifiedDockerfile
image_test.go:88: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-364433
--- PASS: TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.48s)

TestJSONOutput/start/Command (62.35s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-097158 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=docker
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-097158 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=docker: (1m2.344287212s)
--- PASS: TestJSONOutput/start/Command (62.35s)
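Each line that --output=json emits is a CloudEvents-style JSON object (the TestErrorJSONOutput stdout further down shows the exact shape). A quick way to follow progress from a script, assuming jq is available:

    out/minikube-linux-amd64 start -p json-output-097158 --output=json --user=testUser \
      | jq -r 'select(.type == "io.k8s.sigs.minikube.step")
               | .data.currentstep + "/" + .data.totalsteps + ": " + .data.message'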

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.5s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-097158 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.50s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.47s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-097158 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.47s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (5.75s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-097158 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-097158 --output=json --user=testUser: (5.746606867s)
--- PASS: TestJSONOutput/stop/Command (5.75s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.22s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-413584 --memory=3072 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-413584 --memory=3072 --output=json --wait=true --driver=fail: exit status 56 (71.072105ms)
-- stdout --
	{"specversion":"1.0","id":"86e15a6f-3c3f-4b9c-a138-4d028f39562c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-413584] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"97b3fd6d-6d2f-48be-8731-804238b98f28","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21724"}}
	{"specversion":"1.0","id":"fba35918-1733-4cb3-bca3-8a23db139ac5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"ab089ea6-79fc-4b3f-877f-4bc1b96b79da","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21724-845765/kubeconfig"}}
	{"specversion":"1.0","id":"f54ed903-67ce-422e-a5e7-8d12ec28197d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21724-845765/.minikube"}}
	{"specversion":"1.0","id":"398fa8e5-80c8-4940-9c53-0d9eb9b01f0b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"9f175b68-4018-47ce-9349-4469eb305107","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"74656e5a-a213-4c80-b654-5252a0eaeeaa","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-413584" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-413584
--- PASS: TestErrorJSONOutput (0.22s)
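The failure above is machine-readable as well: the final event has type io.k8s.sigs.minikube.error and carries the exit code and error name seen in the log. Extracting it, again assuming jq:

    out/minikube-linux-amd64 start -p json-output-error-413584 --output=json --driver=fail \
      | jq 'select(.type == "io.k8s.sigs.minikube.error")
            | {name: .data.name, exitcode: .data.exitcode, message: .data.message}'
    # -> name DRV_UNSUPPORTED_OS, exitcode "56", as in the event above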

TestKicCustomNetwork/create_custom_network (24.43s)

=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-905012 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-905012 --network=: (22.233468774s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-905012" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-905012
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-905012: (2.175074964s)
--- PASS: TestKicCustomNetwork/create_custom_network (24.43s)
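What this subtest checks: with an empty --network=, minikube creates its own Docker bridge network, named after the profile by default, and the docker network ls step above verifies it exists. The manual equivalent (same profile name as above):

    out/minikube-linux-amd64 start -p docker-network-905012 --network=
    docker network ls --format '{{.Name}}' | grep docker-network-905012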

TestKicCustomNetwork/use_default_bridge_network (24.05s)

=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-618342 --network=bridge
E1013 13:56:30.455308  849401 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-845765/.minikube/profiles/addons-789670/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-618342 --network=bridge: (22.092941981s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-618342" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-618342
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-618342: (1.937394406s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (24.05s)

TestKicExistingNetwork (24.91s)

=== RUN   TestKicExistingNetwork
I1013 13:56:44.954663  849401 cli_runner.go:164] Run: docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W1013 13:56:44.972397  849401 cli_runner.go:211] docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I1013 13:56:44.972477  849401 network_create.go:284] running [docker network inspect existing-network] to gather additional debugging logs...
I1013 13:56:44.972495  849401 cli_runner.go:164] Run: docker network inspect existing-network
W1013 13:56:44.987646  849401 cli_runner.go:211] docker network inspect existing-network returned with exit code 1
I1013 13:56:44.987677  849401 network_create.go:287] error running [docker network inspect existing-network]: docker network inspect existing-network: exit status 1
stdout:
[]
stderr:
Error response from daemon: network existing-network not found
I1013 13:56:44.987697  849401 network_create.go:289] output of [docker network inspect existing-network]: -- stdout --
[]
-- /stdout --
** stderr ** 
Error response from daemon: network existing-network not found
** /stderr **
I1013 13:56:44.987829  849401 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1013 13:56:45.003398  849401 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-ef0be46c41b2 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:86:64:18:f7:35:96} reservation:<nil>}
I1013 13:56:45.003829  849401 network.go:206] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0018a6100}
I1013 13:56:45.003851  849401 network_create.go:124] attempt to create docker network existing-network 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
I1013 13:56:45.003899  849401 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
I1013 13:56:45.058022  849401 network_create.go:108] docker network existing-network 192.168.58.0/24 created
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-amd64 start -p existing-network-045764 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-amd64 start -p existing-network-045764 --network=existing-network: (22.844018605s)
helpers_test.go:175: Cleaning up "existing-network-045764" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p existing-network-045764
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p existing-network-045764: (1.927855223s)
I1013 13:57:09.847527  849401 cli_runner.go:164] Run: docker network ls --filter=label=existing-network --format {{.Name}}
--- PASS: TestKicExistingNetwork (24.91s)
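Condensed from the cli_runner lines above, the scenario is: pre-create a labeled bridge network on a free subnet, then hand it to minikube by name:

    docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 \
      -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 \
      --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network \
      existing-network
    out/minikube-linux-amd64 start -p existing-network-045764 --network=existing-network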

TestKicCustomSubnet (24.19s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-subnet-369019 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-subnet-369019 --subnet=192.168.60.0/24: (22.057432428s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-369019 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-369019" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p custom-subnet-369019
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p custom-subnet-369019: (2.112238138s)
--- PASS: TestKicCustomSubnet (24.19s)
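The assertion behind this test, spelled out: the subnet requested via --subnet must round-trip through the created network's IPAM config:

    out/minikube-linux-amd64 start -p custom-subnet-369019 --subnet=192.168.60.0/24
    docker network inspect custom-subnet-369019 --format '{{(index .IPAM.Config 0).Subnet}}'
    # expected output: 192.168.60.0/24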

TestKicStaticIP (26.98s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-amd64 start -p static-ip-045943 --static-ip=192.168.200.200
E1013 13:57:35.421520  849401 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-845765/.minikube/profiles/functional-574138/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 13:57:53.520264  849401 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-845765/.minikube/profiles/addons-789670/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-amd64 start -p static-ip-045943 --static-ip=192.168.200.200: (24.700074697s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-amd64 -p static-ip-045943 ip
helpers_test.go:175: Cleaning up "static-ip-045943" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p static-ip-045943
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p static-ip-045943: (2.137815403s)
--- PASS: TestKicStaticIP (26.98s)
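Likewise for --static-ip: the address handed to start should be exactly what minikube ip reports afterwards. A scriptable form of the same check (a sketch, not the test's own code):

    out/minikube-linux-amd64 start -p static-ip-045943 --static-ip=192.168.200.200
    [ "$(out/minikube-linux-amd64 -p static-ip-045943 ip)" = "192.168.200.200" ] && echo "static IP honored"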

TestMainNoArgs (0.05s)

=== RUN   TestMainNoArgs
main_test.go:70: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.05s)

TestMinikubeProfile (52.28s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-139188 --driver=docker  --container-runtime=docker
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-139188 --driver=docker  --container-runtime=docker: (24.012768123s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-141988 --driver=docker  --container-runtime=docker
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-141988 --driver=docker  --container-runtime=docker: (22.789625646s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-139188
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-141988
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-141988" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-141988
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p second-141988: (2.109674793s)
helpers_test.go:175: Cleaning up "first-139188" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-139188
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p first-139188: (2.16239556s)
--- PASS: TestMinikubeProfile (52.28s)
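The profile flips above can be reproduced by hand; a sketch for listing what profile list -ojson reports, assuming its usual valid/invalid layout and jq on PATH:

    out/minikube-linux-amd64 profile first-139188                        # select the active profile
    out/minikube-linux-amd64 profile list -ojson | jq -r '.valid[].Name' # assumes the valid[] layout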

TestMountStart/serial/StartWithMountFirst (9.03s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-589672 --memory=3072 --mount-string /tmp/TestMountStartserial2517099850/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-589672 --memory=3072 --mount-string /tmp/TestMountStartserial2517099850/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker: (8.031176781s)
--- PASS: TestMountStart/serial/StartWithMountFirst (9.03s)
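Reading the long start line: --mount-string splits on the colon into host path and guest path, and the remaining --mount-* flags tune the 9p mount (uid/gid, msize, port). The VerifyMount subtests below simply list the guest side over ssh:

    out/minikube-linux-amd64 start -p mount-start-1-589672 --memory=3072 \
      --mount-string /tmp/TestMountStartserial2517099850/001:/minikube-host \
      --mount-gid 0 --mount-uid 0 --mount-msize 6543 --mount-port 46464 \
      --no-kubernetes --driver=docker --container-runtime=docker
    out/minikube-linux-amd64 -p mount-start-1-589672 ssh -- ls /minikube-host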

TestMountStart/serial/VerifyMountFirst (0.29s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-589672 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.29s)

TestMountStart/serial/StartWithMountSecond (11.52s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-607297 --memory=3072 --mount-string /tmp/TestMountStartserial2517099850/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-607297 --memory=3072 --mount-string /tmp/TestMountStartserial2517099850/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker: (10.519289047s)
--- PASS: TestMountStart/serial/StartWithMountSecond (11.52s)

TestMountStart/serial/VerifyMountSecond (0.29s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-607297 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.29s)

TestMountStart/serial/DeleteFirst (1.55s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-589672 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p mount-start-1-589672 --alsologtostderr -v=5: (1.550785307s)
--- PASS: TestMountStart/serial/DeleteFirst (1.55s)

TestMountStart/serial/VerifyMountPostDelete (0.28s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-607297 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.28s)

TestMountStart/serial/Stop (1.21s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:196: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-607297
mount_start_test.go:196: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-607297: (1.208560349s)
--- PASS: TestMountStart/serial/Stop (1.21s)

TestMountStart/serial/RestartStopped (9.33s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:207: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-607297
mount_start_test.go:207: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-607297: (8.325685117s)
--- PASS: TestMountStart/serial/RestartStopped (9.33s)

TestMountStart/serial/VerifyMountPostStop (0.29s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-607297 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.29s)

TestMultiNode/serial/FreshStart2Nodes (91.05s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-542745 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=docker
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-542745 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=docker: (1m30.544034971s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-542745 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (91.05s)

TestMultiNode/serial/DeployApp2Nodes (4.16s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-542745 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-542745 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-542745 -- rollout status deployment/busybox: (2.684328559s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-542745 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-542745 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-542745 -- exec busybox-7b57f96db7-6jf6r -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-542745 -- exec busybox-7b57f96db7-9hc9z -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-542745 -- exec busybox-7b57f96db7-6jf6r -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-542745 -- exec busybox-7b57f96db7-9hc9z -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-542745 -- exec busybox-7b57f96db7-6jf6r -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-542745 -- exec busybox-7b57f96db7-9hc9z -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (4.16s)

TestMultiNode/serial/PingHostFrom2Pods (0.82s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-542745 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-542745 -- exec busybox-7b57f96db7-6jf6r -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-542745 -- exec busybox-7b57f96db7-6jf6r -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-542745 -- exec busybox-7b57f96db7-9hc9z -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-542745 -- exec busybox-7b57f96db7-9hc9z -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.82s)
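The shell pipeline in this subtest deserves a gloss: nslookup prints the answer for host.minikube.internal on its fifth output line, awk 'NR==5' keeps only that line, and cut -d' ' -f3 takes the address field, which is then pinged:

    out/minikube-linux-amd64 kubectl -p multinode-542745 -- exec busybox-7b57f96db7-6jf6r -- \
      sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
    # -> 192.168.67.1 (the host-side gateway), which the next step pings:
    out/minikube-linux-amd64 kubectl -p multinode-542745 -- exec busybox-7b57f96db7-6jf6r -- \
      sh -c "ping -c 1 192.168.67.1"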

TestMultiNode/serial/AddNode (32s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-542745 -v=5 --alsologtostderr
E1013 14:01:30.453423  849401 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-845765/.minikube/profiles/addons-789670/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-542745 -v=5 --alsologtostderr: (31.304728621s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-542745 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (32.00s)

TestMultiNode/serial/MultiNodeLabels (0.07s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-542745 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.07s)

TestMultiNode/serial/ProfileList (0.71s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.71s)

TestMultiNode/serial/CopyFile (10.03s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-542745 status --output json --alsologtostderr
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-542745 cp testdata/cp-test.txt multinode-542745:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-542745 ssh -n multinode-542745 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-542745 cp multinode-542745:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3958555179/001/cp-test_multinode-542745.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-542745 ssh -n multinode-542745 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-542745 cp multinode-542745:/home/docker/cp-test.txt multinode-542745-m02:/home/docker/cp-test_multinode-542745_multinode-542745-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-542745 ssh -n multinode-542745 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-542745 ssh -n multinode-542745-m02 "sudo cat /home/docker/cp-test_multinode-542745_multinode-542745-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-542745 cp multinode-542745:/home/docker/cp-test.txt multinode-542745-m03:/home/docker/cp-test_multinode-542745_multinode-542745-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-542745 ssh -n multinode-542745 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-542745 ssh -n multinode-542745-m03 "sudo cat /home/docker/cp-test_multinode-542745_multinode-542745-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-542745 cp testdata/cp-test.txt multinode-542745-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-542745 ssh -n multinode-542745-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-542745 cp multinode-542745-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3958555179/001/cp-test_multinode-542745-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-542745 ssh -n multinode-542745-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-542745 cp multinode-542745-m02:/home/docker/cp-test.txt multinode-542745:/home/docker/cp-test_multinode-542745-m02_multinode-542745.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-542745 ssh -n multinode-542745-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-542745 ssh -n multinode-542745 "sudo cat /home/docker/cp-test_multinode-542745-m02_multinode-542745.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-542745 cp multinode-542745-m02:/home/docker/cp-test.txt multinode-542745-m03:/home/docker/cp-test_multinode-542745-m02_multinode-542745-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-542745 ssh -n multinode-542745-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-542745 ssh -n multinode-542745-m03 "sudo cat /home/docker/cp-test_multinode-542745-m02_multinode-542745-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-542745 cp testdata/cp-test.txt multinode-542745-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-542745 ssh -n multinode-542745-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-542745 cp multinode-542745-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3958555179/001/cp-test_multinode-542745-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-542745 ssh -n multinode-542745-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-542745 cp multinode-542745-m03:/home/docker/cp-test.txt multinode-542745:/home/docker/cp-test_multinode-542745-m03_multinode-542745.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-542745 ssh -n multinode-542745-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-542745 ssh -n multinode-542745 "sudo cat /home/docker/cp-test_multinode-542745-m03_multinode-542745.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-542745 cp multinode-542745-m03:/home/docker/cp-test.txt multinode-542745-m02:/home/docker/cp-test_multinode-542745-m03_multinode-542745-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-542745 ssh -n multinode-542745-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-542745 ssh -n multinode-542745-m02 "sudo cat /home/docker/cp-test_multinode-542745-m03_multinode-542745-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (10.03s)
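The block above is a full matrix of minikube cp directions. The three shapes, with illustrative destination paths (the test's own /tmp paths are longer), each verified by an ssh'd sudo cat:

    # host -> node
    out/minikube-linux-amd64 -p multinode-542745 cp testdata/cp-test.txt multinode-542745-m02:/home/docker/cp-test.txt
    # node -> host
    out/minikube-linux-amd64 -p multinode-542745 cp multinode-542745-m02:/home/docker/cp-test.txt /tmp/cp-test_m02.txt
    # node -> node
    out/minikube-linux-amd64 -p multinode-542745 cp multinode-542745-m02:/home/docker/cp-test.txt \
      multinode-542745-m03:/home/docker/cp-test_m02_m03.txt
    # verify on the destination node
    out/minikube-linux-amd64 -p multinode-542745 ssh -n multinode-542745-m03 "sudo cat /home/docker/cp-test_m02_m03.txt"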

TestMultiNode/serial/StopNode (2.24s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-542745 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-542745 node stop m03: (1.244479269s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-542745 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-542745 status: exit status 7 (494.712354ms)
-- stdout --
	multinode-542745
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-542745-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-542745-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-542745 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-542745 status --alsologtostderr: exit status 7 (500.581139ms)
-- stdout --
	multinode-542745
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-542745-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-542745-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I1013 14:01:49.521551 1044534 out.go:360] Setting OutFile to fd 1 ...
	I1013 14:01:49.521822 1044534 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1013 14:01:49.521833 1044534 out.go:374] Setting ErrFile to fd 2...
	I1013 14:01:49.521837 1044534 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1013 14:01:49.522083 1044534 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21724-845765/.minikube/bin
	I1013 14:01:49.522320 1044534 out.go:368] Setting JSON to false
	I1013 14:01:49.522355 1044534 mustload.go:65] Loading cluster: multinode-542745
	I1013 14:01:49.522466 1044534 notify.go:220] Checking for updates...
	I1013 14:01:49.522835 1044534 config.go:182] Loaded profile config "multinode-542745": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.1
	I1013 14:01:49.522853 1044534 status.go:174] checking status of multinode-542745 ...
	I1013 14:01:49.523318 1044534 cli_runner.go:164] Run: docker container inspect multinode-542745 --format={{.State.Status}}
	I1013 14:01:49.542044 1044534 status.go:371] multinode-542745 host status = "Running" (err=<nil>)
	I1013 14:01:49.542080 1044534 host.go:66] Checking if "multinode-542745" exists ...
	I1013 14:01:49.542351 1044534 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-542745
	I1013 14:01:49.559713 1044534 host.go:66] Checking if "multinode-542745" exists ...
	I1013 14:01:49.559953 1044534 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1013 14:01:49.559991 1044534 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-542745
	I1013 14:01:49.576417 1044534 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33283 SSHKeyPath:/home/jenkins/minikube-integration/21724-845765/.minikube/machines/multinode-542745/id_rsa Username:docker}
	I1013 14:01:49.679054 1044534 ssh_runner.go:195] Run: systemctl --version
	I1013 14:01:49.685563 1044534 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1013 14:01:49.699011 1044534 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1013 14:01:49.757149 1044534 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:50 OomKillDisable:false NGoroutines:65 SystemTime:2025-10-13 14:01:49.747595903 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1013 14:01:49.757765 1044534 kubeconfig.go:125] found "multinode-542745" server: "https://192.168.67.2:8443"
	I1013 14:01:49.757806 1044534 api_server.go:166] Checking apiserver status ...
	I1013 14:01:49.757860 1044534 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1013 14:01:49.770887 1044534 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2116/cgroup
	W1013 14:01:49.779277 1044534 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/2116/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1013 14:01:49.779328 1044534 ssh_runner.go:195] Run: ls
	I1013 14:01:49.782854 1044534 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I1013 14:01:49.788666 1044534 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I1013 14:01:49.788691 1044534 status.go:463] multinode-542745 apiserver status = Running (err=<nil>)
	I1013 14:01:49.788704 1044534 status.go:176] multinode-542745 status: &{Name:multinode-542745 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1013 14:01:49.788731 1044534 status.go:174] checking status of multinode-542745-m02 ...
	I1013 14:01:49.788991 1044534 cli_runner.go:164] Run: docker container inspect multinode-542745-m02 --format={{.State.Status}}
	I1013 14:01:49.806117 1044534 status.go:371] multinode-542745-m02 host status = "Running" (err=<nil>)
	I1013 14:01:49.806143 1044534 host.go:66] Checking if "multinode-542745-m02" exists ...
	I1013 14:01:49.806413 1044534 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-542745-m02
	I1013 14:01:49.823732 1044534 host.go:66] Checking if "multinode-542745-m02" exists ...
	I1013 14:01:49.824069 1044534 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1013 14:01:49.824177 1044534 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-542745-m02
	I1013 14:01:49.841511 1044534 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33288 SSHKeyPath:/home/jenkins/minikube-integration/21724-845765/.minikube/machines/multinode-542745-m02/id_rsa Username:docker}
	I1013 14:01:49.941159 1044534 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1013 14:01:49.953426 1044534 status.go:176] multinode-542745-m02 status: &{Name:multinode-542745-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1013 14:01:49.953466 1044534 status.go:174] checking status of multinode-542745-m03 ...
	I1013 14:01:49.953751 1044534 cli_runner.go:164] Run: docker container inspect multinode-542745-m03 --format={{.State.Status}}
	I1013 14:01:49.971173 1044534 status.go:371] multinode-542745-m03 host status = "Stopped" (err=<nil>)
	I1013 14:01:49.971195 1044534 status.go:384] host is not running, skipping remaining checks
	I1013 14:01:49.971202 1044534 status.go:176] multinode-542745-m03 status: &{Name:multinode-542745-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.24s)
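Note the exit codes: with m03 stopped, status exits 7 rather than 0, as both Non-zero exit lines above show, so scripts can branch on the code instead of parsing the table:

    out/minikube-linux-amd64 -p multinode-542745 status || echo "cluster degraded (exit $?)"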

TestMultiNode/serial/StartAfterStop (8.75s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-542745 node start m03 -v=5 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-542745 node start m03 -v=5 --alsologtostderr: (8.04863677s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-542745 status -v=5 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (8.75s)

TestMultiNode/serial/RestartKeepsNodes (79.8s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-542745
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-542745
multinode_test.go:321: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-542745: (22.63265723s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-542745 --wait=true -v=5 --alsologtostderr
E1013 14:02:35.421330  849401 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-845765/.minikube/profiles/functional-574138/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-542745 --wait=true -v=5 --alsologtostderr: (57.05986558s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-542745
--- PASS: TestMultiNode/serial/RestartKeepsNodes (79.80s)
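The invariant this test pins down: the node list captured before the stop must match the list after the restart. By hand (a sketch, not the test's code):

    before=$(out/minikube-linux-amd64 node list -p multinode-542745)
    out/minikube-linux-amd64 stop -p multinode-542745
    out/minikube-linux-amd64 start -p multinode-542745 --wait=true
    after=$(out/minikube-linux-amd64 node list -p multinode-542745)
    [ "$before" = "$after" ] && echo "node list preserved across restart"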

TestMultiNode/serial/DeleteNode (5.3s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-542745 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-542745 node delete m03: (4.647823114s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-542745 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.30s)
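The go-template in the final check, unescaped for readability, prints one True/False per node Ready condition; after the delete it should emit exactly two True lines:

    kubectl get nodes -o go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}}{{.status}}{{"\n"}}{{end}}{{end}}{{end}}'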

TestMultiNode/serial/StopMultiNode (21.7s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-542745 stop
multinode_test.go:345: (dbg) Done: out/minikube-linux-amd64 -p multinode-542745 stop: (21.530402743s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-542745 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-542745 status: exit status 7 (87.698231ms)
-- stdout --
	multinode-542745
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-542745-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-542745 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-542745 status --alsologtostderr: exit status 7 (85.558014ms)
-- stdout --
	multinode-542745
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-542745-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I1013 14:03:45.481368 1059320 out.go:360] Setting OutFile to fd 1 ...
	I1013 14:03:45.481596 1059320 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1013 14:03:45.481603 1059320 out.go:374] Setting ErrFile to fd 2...
	I1013 14:03:45.481607 1059320 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1013 14:03:45.481825 1059320 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21724-845765/.minikube/bin
	I1013 14:03:45.481986 1059320 out.go:368] Setting JSON to false
	I1013 14:03:45.482018 1059320 mustload.go:65] Loading cluster: multinode-542745
	I1013 14:03:45.482113 1059320 notify.go:220] Checking for updates...
	I1013 14:03:45.482618 1059320 config.go:182] Loaded profile config "multinode-542745": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.1
	I1013 14:03:45.482640 1059320 status.go:174] checking status of multinode-542745 ...
	I1013 14:03:45.483322 1059320 cli_runner.go:164] Run: docker container inspect multinode-542745 --format={{.State.Status}}
	I1013 14:03:45.502025 1059320 status.go:371] multinode-542745 host status = "Stopped" (err=<nil>)
	I1013 14:03:45.502077 1059320 status.go:384] host is not running, skipping remaining checks
	I1013 14:03:45.502105 1059320 status.go:176] multinode-542745 status: &{Name:multinode-542745 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1013 14:03:45.502148 1059320 status.go:174] checking status of multinode-542745-m02 ...
	I1013 14:03:45.502524 1059320 cli_runner.go:164] Run: docker container inspect multinode-542745-m02 --format={{.State.Status}}
	I1013 14:03:45.519115 1059320 status.go:371] multinode-542745-m02 host status = "Stopped" (err=<nil>)
	I1013 14:03:45.519134 1059320 status.go:384] host is not running, skipping remaining checks
	I1013 14:03:45.519139 1059320 status.go:176] multinode-542745-m02 status: &{Name:multinode-542745-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (21.70s)
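Note: "minikube status" exits non-zero by design when components are down, which is why the harness accepts exit status 7 above once both hosts are stopped. A minimal Go sketch of the same check, reusing the profile name from this run; it only interprets the one exit code this log demonstrates:

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	// Profile name copied from the run above; substitute your own.
	out, err := exec.Command("minikube", "-p", "multinode-542745", "status").CombinedOutput()
	fmt.Print(string(out))

	// A non-zero exit is expected for a stopped cluster; this run exited 7.
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) && exitErr.ExitCode() == 7 {
		fmt.Println("exit 7: all hosts report Stopped, as in the log above")
	}
}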

                                                
                                    
TestMultiNode/serial/RestartMultiNode (51.5s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-542745 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=docker
E1013 14:03:58.485331  849401 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-845765/.minikube/profiles/functional-574138/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-542745 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=docker: (50.858894158s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-542745 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (51.50s)
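Note: the final verification above uses a kubectl go-template that walks every node's conditions and prints the status of the "Ready" condition, one line per node, so a healthy two-node cluster yields two "True" lines. A sketch that runs the same template and tallies the result (assumes kubectl is on PATH and pointed at the restarted cluster):

package main

import (
	"bufio"
	"bytes"
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Same template the test passes to kubectl: print each node's Ready status.
	tmpl := `{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}}{{.status}}{{"\n"}}{{end}}{{end}}{{end}}`

	out, err := exec.Command("kubectl", "get", "nodes", "-o", "go-template="+tmpl).Output()
	if err != nil {
		fmt.Println("kubectl failed:", err)
		return
	}

	ready, total := 0, 0
	sc := bufio.NewScanner(bytes.NewReader(out))
	for sc.Scan() {
		switch strings.TrimSpace(sc.Text()) {
		case "":
			// skip blank lines
		case "True":
			ready++
			total++
		default:
			total++
		}
	}
	fmt.Printf("%d/%d nodes Ready\n", ready, total)
}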

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (25.85s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-542745
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-542745-m02 --driver=docker  --container-runtime=docker
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-542745-m02 --driver=docker  --container-runtime=docker: exit status 14 (66.114933ms)

                                                
                                                
-- stdout --
	* [multinode-542745-m02] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21724
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21724-845765/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21724-845765/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-542745-m02' is duplicated with machine name 'multinode-542745-m02' in profile 'multinode-542745'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-542745-m03 --driver=docker  --container-runtime=docker
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-542745-m03 --driver=docker  --container-runtime=docker: (23.295572966s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-542745
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-542745: exit status 80 (289.526546ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-542745 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-542745-m03 already exists in multinode-542745-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_1.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-542745-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-amd64 delete -p multinode-542745-m03: (2.140825097s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (25.85s)
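Note: both failures above are deliberate: exit 14 (MK_USAGE) when a new profile name collides, and exit 80 (GUEST_NODE_ADD) when "node add" would reuse a name owned elsewhere. A wrapper can screen candidate names against "minikube profile list -o json" first; the struct below is an assumption about the current output shape (a top-level "valid" array of profiles with a "Name" field). It also only catches whole-profile collisions: the m02 case above collided with a machine name inside an existing profile, which this check would miss.

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// profileList assumes `minikube profile list -o json` returns
// {"valid": [{"Name": ...}, ...], "invalid": [...]} as in current releases.
type profileList struct {
	Valid []struct {
		Name string `json:"Name"`
	} `json:"valid"`
}

func nameTaken(name string) (bool, error) {
	out, err := exec.Command("minikube", "profile", "list", "-o", "json").Output()
	if err != nil {
		return false, err
	}
	var pl profileList
	if err := json.Unmarshal(out, &pl); err != nil {
		return false, err
	}
	for _, p := range pl.Valid {
		if p.Name == name {
			return true, nil
		}
	}
	return false, nil
}

func main() {
	taken, err := nameTaken("multinode-542745-m02")
	if err != nil {
		fmt.Println("profile list failed:", err)
		return
	}
	fmt.Println("profile name already taken:", taken)
}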

                                                
                                    
TestPreload (110.28s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:43: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-319116 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.32.0
preload_test.go:43: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-319116 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.32.0: (44.722854798s)
preload_test.go:51: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-319116 image pull gcr.io/k8s-minikube/busybox
preload_test.go:51: (dbg) Done: out/minikube-linux-amd64 -p test-preload-319116 image pull gcr.io/k8s-minikube/busybox: (2.259176034s)
preload_test.go:57: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-319116
preload_test.go:57: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-319116: (5.684008613s)
preload_test.go:65: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-319116 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=docker
E1013 14:06:30.453348  849401 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-845765/.minikube/profiles/addons-789670/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
preload_test.go:65: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-319116 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=docker: (55.17517073s)
preload_test.go:70: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-319116 image list
helpers_test.go:175: Cleaning up "test-preload-319116" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-319116
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-319116: (2.22011288s)
--- PASS: TestPreload (110.28s)
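Note: the sequence above is the point of the preload test: start with --preload=false, pull busybox into the cluster, stop, restart, then confirm via "image list" that the restart did not wipe the manually pulled image. A small sketch of that final assertion (profile and image names copied from the run):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("minikube", "-p", "test-preload-319116", "image", "list").Output()
	if err != nil {
		fmt.Println("image list failed:", err)
		return
	}
	// The pulled image must survive the stop/start cycle.
	if strings.Contains(string(out), "gcr.io/k8s-minikube/busybox") {
		fmt.Println("busybox survived the restart")
	} else {
		fmt.Println("busybox missing: restart clobbered the pulled image")
	}
}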

                                                
                                    
TestScheduledStopUnix (95.6s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-902075 --memory=3072 --driver=docker  --container-runtime=docker
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-902075 --memory=3072 --driver=docker  --container-runtime=docker: (22.516261298s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-902075 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-902075 -n scheduled-stop-902075
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-902075 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
I1013 14:07:20.082645  849401 retry.go:31] will retry after 134.411µs: open /home/jenkins/minikube-integration/21724-845765/.minikube/profiles/scheduled-stop-902075/pid: no such file or directory
I1013 14:07:20.083843  849401 retry.go:31] will retry after 172.511µs: open /home/jenkins/minikube-integration/21724-845765/.minikube/profiles/scheduled-stop-902075/pid: no such file or directory
I1013 14:07:20.084986  849401 retry.go:31] will retry after 278.053µs: open /home/jenkins/minikube-integration/21724-845765/.minikube/profiles/scheduled-stop-902075/pid: no such file or directory
I1013 14:07:20.086150  849401 retry.go:31] will retry after 461.77µs: open /home/jenkins/minikube-integration/21724-845765/.minikube/profiles/scheduled-stop-902075/pid: no such file or directory
I1013 14:07:20.087300  849401 retry.go:31] will retry after 579.539µs: open /home/jenkins/minikube-integration/21724-845765/.minikube/profiles/scheduled-stop-902075/pid: no such file or directory
I1013 14:07:20.088458  849401 retry.go:31] will retry after 1.112016ms: open /home/jenkins/minikube-integration/21724-845765/.minikube/profiles/scheduled-stop-902075/pid: no such file or directory
I1013 14:07:20.090695  849401 retry.go:31] will retry after 1.389636ms: open /home/jenkins/minikube-integration/21724-845765/.minikube/profiles/scheduled-stop-902075/pid: no such file or directory
I1013 14:07:20.092924  849401 retry.go:31] will retry after 940.254µs: open /home/jenkins/minikube-integration/21724-845765/.minikube/profiles/scheduled-stop-902075/pid: no such file or directory
I1013 14:07:20.094051  849401 retry.go:31] will retry after 1.420885ms: open /home/jenkins/minikube-integration/21724-845765/.minikube/profiles/scheduled-stop-902075/pid: no such file or directory
I1013 14:07:20.096215  849401 retry.go:31] will retry after 2.603616ms: open /home/jenkins/minikube-integration/21724-845765/.minikube/profiles/scheduled-stop-902075/pid: no such file or directory
I1013 14:07:20.099438  849401 retry.go:31] will retry after 3.59812ms: open /home/jenkins/minikube-integration/21724-845765/.minikube/profiles/scheduled-stop-902075/pid: no such file or directory
I1013 14:07:20.103655  849401 retry.go:31] will retry after 9.118198ms: open /home/jenkins/minikube-integration/21724-845765/.minikube/profiles/scheduled-stop-902075/pid: no such file or directory
I1013 14:07:20.113927  849401 retry.go:31] will retry after 11.407939ms: open /home/jenkins/minikube-integration/21724-845765/.minikube/profiles/scheduled-stop-902075/pid: no such file or directory
I1013 14:07:20.126185  849401 retry.go:31] will retry after 12.854944ms: open /home/jenkins/minikube-integration/21724-845765/.minikube/profiles/scheduled-stop-902075/pid: no such file or directory
I1013 14:07:20.139425  849401 retry.go:31] will retry after 25.981974ms: open /home/jenkins/minikube-integration/21724-845765/.minikube/profiles/scheduled-stop-902075/pid: no such file or directory
I1013 14:07:20.165706  849401 retry.go:31] will retry after 38.341709ms: open /home/jenkins/minikube-integration/21724-845765/.minikube/profiles/scheduled-stop-902075/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-902075 --cancel-scheduled
E1013 14:07:35.421305  849401 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-845765/.minikube/profiles/functional-574138/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-902075 -n scheduled-stop-902075
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-902075
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-902075 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-902075
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-902075: exit status 7 (70.024723ms)

                                                
                                                
-- stdout --
	scheduled-stop-902075
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-902075 -n scheduled-stop-902075
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-902075 -n scheduled-stop-902075: exit status 7 (68.355048ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-902075" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-902075
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p scheduled-stop-902075: (1.6759166s)
--- PASS: TestScheduledStopUnix (95.60s)
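Note: the burst of retry.go lines above is an exponential backoff loop polling for the scheduled-stop pid file, starting in the microsecond range and growing each attempt. A simplified sketch of the same pattern (fixed doubling, no jitter, and a hypothetical path):

package main

import (
	"fmt"
	"os"
	"time"
)

// waitForFile polls for path with exponentially growing delays, like the
// retry.go loop in the log (which also adds jitter; omitted here).
func waitForFile(path string, maxWait time.Duration) error {
	delay := 100 * time.Microsecond
	deadline := time.Now().Add(maxWait)
	for time.Now().Before(deadline) {
		if _, err := os.Stat(path); err == nil {
			return nil
		}
		time.Sleep(delay)
		delay *= 2
	}
	return fmt.Errorf("timed out after %s waiting for %s", maxWait, path)
}

func main() {
	err := waitForFile("/tmp/scheduled-stop-example/pid", 2*time.Second)
	fmt.Println("result:", err)
}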

                                                
                                    
TestInsufficientStorage (10.65s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-amd64 start -p insufficient-storage-097658 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=docker
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p insufficient-storage-097658 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=docker: exit status 26 (8.371122042s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"6658aebf-13d6-4fe8-b54f-d41f5cc78f7d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-097658] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"d73044f6-7497-49b4-98c3-505ddf5f5c66","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21724"}}
	{"specversion":"1.0","id":"a6149737-eb94-4b24-ad76-f7935c2c9f8d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"81c056b6-5f4f-4812-bf0f-53374b911129","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21724-845765/kubeconfig"}}
	{"specversion":"1.0","id":"192cdb15-1f3e-462e-8dd5-e37542367e97","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21724-845765/.minikube"}}
	{"specversion":"1.0","id":"8d7fefcd-475c-448e-88e7-725d3358ebfa","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"b6714f34-b58e-4406-ae37-9e5d89de36d7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"514cf889-0765-46a8-8c62-6bde39e483ee","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"8abfde8e-c5a5-41aa-8628-df220273ced9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"b3a77c63-f17b-476c-9f78-336976269609","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"8de6f134-c8c3-490e-8789-6bf9a8398d95","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"76a39ff4-05f8-4175-9dc8-e9a3301493c7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-097658\" primary control-plane node in \"insufficient-storage-097658\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"c810abb6-8dd9-4868-9a73-c09a0980f814","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.48-1759745255-21703 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"fd12246b-7031-484a-a821-bdf41cbc11f7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=3072MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"c12baded-7817-4396-bd33-c8941622e10c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

                                                
                                                
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-097658 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-097658 --output=json --layout=cluster: exit status 7 (286.418908ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-097658","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=3072MB) ...","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-097658","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E1013 14:09:18.830234 1093482 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-097658" does not appear in /home/jenkins/minikube-integration/21724-845765/kubeconfig

                                                
                                                
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-097658 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-097658 --output=json --layout=cluster: exit status 7 (286.968561ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-097658","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-097658","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E1013 14:09:19.118268 1093593 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-097658" does not appear in /home/jenkins/minikube-integration/21724-845765/kubeconfig
	E1013 14:09:19.128772 1093593 status.go:258] unable to read event log: stat: stat /home/jenkins/minikube-integration/21724-845765/.minikube/profiles/insufficient-storage-097658/events.json: no such file or directory

                                                
                                                
** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-097658" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p insufficient-storage-097658
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p insufficient-storage-097658: (1.705081076s)
--- PASS: TestInsufficientStorage (10.65s)
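Note: with --output=json, minikube emits one CloudEvents envelope per line, and the test keys off the final io.k8s.sigs.minikube.error event (exitcode 26, RSRC_DOCKER_STORAGE). A sketch that scans such a stream and surfaces error events; the field names come straight from the JSON above, where all data values are strings:

package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
)

// event mirrors the envelope seen in the log above.
type event struct {
	Type string            `json:"type"`
	Data map[string]string `json:"data"`
}

func main() {
	// Pipe `minikube start --output=json ...` into stdin.
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 0, 1<<20), 1<<20) // some event lines are long
	for sc.Scan() {
		var ev event
		if json.Unmarshal(sc.Bytes(), &ev) != nil {
			continue // not a JSON event line
		}
		if ev.Type == "io.k8s.sigs.minikube.error" {
			fmt.Printf("error %s (exit %s): %s\n",
				ev.Data["name"], ev.Data["exitcode"], ev.Data["message"])
		}
	}
}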

                                                
                                    
TestRunningBinaryUpgrade (54.43s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.32.0.966001689 start -p running-upgrade-250056 --memory=3072 --vm-driver=docker  --container-runtime=docker
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.32.0.966001689 start -p running-upgrade-250056 --memory=3072 --vm-driver=docker  --container-runtime=docker: (24.676242746s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-250056 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-250056 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (24.932726645s)
helpers_test.go:175: Cleaning up "running-upgrade-250056" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-250056
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-250056: (2.225495549s)
--- PASS: TestRunningBinaryUpgrade (54.43s)

                                                
                                    
TestKubernetesUpgrade (338.14s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-886788 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-886788 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (30.536822105s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-886788
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-886788: (1.887437401s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-886788 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-886788 status --format={{.Host}}: exit status 7 (73.310027ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-886788 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-886788 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (4m28.113300691s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-886788 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-886788 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=docker
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-886788 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=docker: exit status 106 (87.732448ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-886788] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21724
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21724-845765/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21724-845765/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.34.1 cluster to v1.28.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.28.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-886788
	    minikube start -p kubernetes-upgrade-886788 --kubernetes-version=v1.28.0
	    
	    2) Create a second cluster with Kubernetes 1.28.0, by running:
	    
	    minikube start -p kubernetes-upgrade-8867882 --kubernetes-version=v1.28.0
	    
	    3) Use the existing cluster at version Kubernetes 1.34.1, by running:
	    
	    minikube start -p kubernetes-upgrade-886788 --kubernetes-version=v1.34.1
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-886788 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-886788 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (34.911513496s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-886788" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-886788
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-886788: (2.453542955s)
--- PASS: TestKubernetesUpgrade (338.14s)
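Note: the downgrade attempt fails fast with exit 106 (K8S_DOWNGRADE_UNSUPPORTED) because minikube only upgrades an existing cluster in place. A wrapper can detect this before invoking minikube by comparing versions; the sketch below leans on the golang.org/x/mod/semver package rather than anything minikube itself exposes:

package main

import (
	"fmt"

	"golang.org/x/mod/semver"
)

func main() {
	current, desired := "v1.34.1", "v1.28.0" // versions from the run above

	switch semver.Compare(desired, current) {
	case -1:
		// Matches minikube's own advice above: delete and recreate, or use
		// a second profile, rather than downgrading in place.
		fmt.Println("downgrade requested: recreate the cluster instead")
	case 0:
		fmt.Println("same version: plain restart is fine")
	default:
		fmt.Println("upgrade: minikube start --kubernetes-version=" + desired)
	}
}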

                                                
                                    
TestMissingContainerUpgrade (102.84s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.32.0.3123849072 start -p missing-upgrade-944791 --memory=3072 --driver=docker  --container-runtime=docker
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.32.0.3123849072 start -p missing-upgrade-944791 --memory=3072 --driver=docker  --container-runtime=docker: (49.734251434s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-944791
version_upgrade_test.go:318: (dbg) Done: docker stop missing-upgrade-944791: (1.639806237s)
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-944791
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-amd64 start -p missing-upgrade-944791 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-amd64 start -p missing-upgrade-944791 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (46.534743353s)
helpers_test.go:175: Cleaning up "missing-upgrade-944791" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p missing-upgrade-944791
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p missing-upgrade-944791: (2.31308105s)
--- PASS: TestMissingContainerUpgrade (102.84s)

                                                
                                    
TestStoppedBinaryUpgrade/Setup (4.4s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (4.40s)

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (51.58s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.32.0.1170327568 start -p stopped-upgrade-810508 --memory=3072 --vm-driver=docker  --container-runtime=docker
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.32.0.1170327568 start -p stopped-upgrade-810508 --memory=3072 --vm-driver=docker  --container-runtime=docker: (23.300731754s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.32.0.1170327568 -p stopped-upgrade-810508 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.32.0.1170327568 -p stopped-upgrade-810508 stop: (10.75336979s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-810508 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
E1013 14:11:30.452692  849401 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-845765/.minikube/profiles/addons-789670/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-810508 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (17.524624785s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (51.58s)

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (0.85s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-810508
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.85s)

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.07s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:85: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-860016 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=docker
no_kubernetes_test.go:85: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-860016 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=docker: exit status 14 (68.197336ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-860016] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21724
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21724-845765/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21724-845765/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.07s)

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (24.4s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:97: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-860016 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
no_kubernetes_test.go:97: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-860016 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (24.052929273s)
no_kubernetes_test.go:202: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-860016 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (24.40s)

                                                
                                    
TestPause/serial/Start (64.51s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-884030 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=docker
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-884030 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=docker: (1m4.514418345s)
--- PASS: TestPause/serial/Start (64.51s)

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (18.86s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:114: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-860016 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
no_kubernetes_test.go:114: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-860016 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (16.819717249s)
no_kubernetes_test.go:202: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-860016 status -o json
no_kubernetes_test.go:202: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-860016 status -o json: exit status 2 (295.283444ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-860016","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:126: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-860016
no_kubernetes_test.go:126: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-860016: (1.743596794s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (18.86s)

                                                
                                    
TestNoKubernetes/serial/Start (7.84s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:138: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-860016 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
no_kubernetes_test.go:138: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-860016 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (7.838435826s)
--- PASS: TestNoKubernetes/serial/Start (7.84s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunning (0.27s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-860016 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-860016 "sudo systemctl is-active --quiet service kubelet": exit status 1 (273.361794ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.27s)
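Note: this probe passes precisely because it fails: "systemctl is-active --quiet" exits 0 only for an active unit, so the status-3 exit proves kubelet is not running on a --no-kubernetes node. The same check as a sketch (profile name from the run; any non-zero exit is treated as "not active"):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("minikube", "ssh", "-p", "NoKubernetes-860016",
		"sudo systemctl is-active --quiet service kubelet")
	if err := cmd.Run(); err != nil {
		fmt.Println("kubelet not active, as expected with --no-kubernetes:", err)
		return
	}
	fmt.Println("unexpected: kubelet is active")
}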

                                                
                                    
TestNoKubernetes/serial/ProfileList (31.64s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:171: (dbg) Run:  out/minikube-linux-amd64 profile list
E1013 14:12:35.423743  849401 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-845765/.minikube/profiles/functional-574138/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
no_kubernetes_test.go:171: (dbg) Done: out/minikube-linux-amd64 profile list: (16.446209459s)
no_kubernetes_test.go:181: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
no_kubernetes_test.go:181: (dbg) Done: out/minikube-linux-amd64 profile list --output=json: (15.192390511s)
--- PASS: TestNoKubernetes/serial/ProfileList (31.64s)

                                                
                                    
TestNoKubernetes/serial/Stop (1.21s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:160: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-860016
no_kubernetes_test.go:160: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-860016: (1.213623477s)
--- PASS: TestNoKubernetes/serial/Stop (1.21s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (8.69s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:193: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-860016 --driver=docker  --container-runtime=docker
no_kubernetes_test.go:193: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-860016 --driver=docker  --container-runtime=docker: (8.688510606s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (8.69s)

                                                
                                    
TestPause/serial/SecondStartNoReconfiguration (46.29s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-884030 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-884030 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (46.266305713s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (46.29s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.28s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-860016 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-860016 "sudo systemctl is-active --quiet service kubelet": exit status 1 (275.782084ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.28s)

                                                
                                    
TestNetworkPlugins/group/auto/Start (65.3s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-196580 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-196580 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=docker: (1m5.302879024s)
--- PASS: TestNetworkPlugins/group/auto/Start (65.30s)

                                                
                                    
TestPause/serial/Pause (0.66s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-884030 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.66s)

                                                
                                    
TestPause/serial/VerifyStatus (0.38s)

                                                
                                                
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p pause-884030 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p pause-884030 --output=json --layout=cluster: exit status 2 (379.328052ms)

                                                
                                                
-- stdout --
	{"Name":"pause-884030","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 12 containers in: kube-system, kubernetes-dashboard, istio-operator","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-884030","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.38s)
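Note: the --layout=cluster JSON above encodes component state with HTTP-flavored codes; this report alone shows 200 (OK), 405 (Stopped), 418 (Paused), 500 (Error) and 507 (InsufficientStorage). A sketch that decodes the top-level fields; the command itself exits 2 for a paused cluster, so stdout is read even on a non-zero exit:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// clusterStatus covers just the top-level fields visible in the log.
type clusterStatus struct {
	Name       string `json:"Name"`
	StatusCode int    `json:"StatusCode"`
	StatusName string `json:"StatusName"`
}

func main() {
	// Output() still returns captured stdout alongside an *ExitError.
	out, _ := exec.Command("minikube", "status", "-p", "pause-884030",
		"--output=json", "--layout=cluster").Output()
	if len(out) == 0 {
		fmt.Println("no status output")
		return
	}
	var st clusterStatus
	if err := json.Unmarshal(out, &st); err != nil {
		fmt.Println("decode failed:", err)
		return
	}
	fmt.Printf("%s: %d %s\n", st.Name, st.StatusCode, st.StatusName)
}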

                                                
                                    
TestPause/serial/Unpause (0.5s)

                                                
                                                
=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-amd64 unpause -p pause-884030 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.50s)

                                                
                                    
TestPause/serial/PauseAgain (0.69s)

                                                
                                                
=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-884030 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.69s)

                                                
                                    
TestPause/serial/DeletePaused (2.22s)

                                                
                                                
=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p pause-884030 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p pause-884030 --alsologtostderr -v=5: (2.224111876s)
--- PASS: TestPause/serial/DeletePaused (2.22s)

                                                
                                    
TestPause/serial/VerifyDeletedResources (13.12s)

                                                
                                                
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
pause_test.go:142: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (13.048980985s)
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-884030
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-884030: exit status 1 (22.629946ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error response from daemon: get pause-884030: no such volume

                                                
                                                
** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (13.12s)

                                                
                                    
TestNetworkPlugins/group/flannel/Start (41.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-196580 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-196580 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=docker: (41.146304427s)
--- PASS: TestNetworkPlugins/group/flannel/Start (41.15s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Start (71.63s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-196580 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=docker
E1013 14:14:33.522432  849401 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-845765/.minikube/profiles/addons-789670/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-196580 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=docker: (1m11.628952356s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (71.63s)

                                                
                                    
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:352: "kube-flannel-ds-lt8hv" [29f86524-29d2-481f-9b35-8af9fea6e632] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.003975352s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/auto/KubeletFlags (0.31s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-196580 "pgrep -a kubelet"
I1013 14:14:49.692488  849401 config.go:182] Loaded profile config "auto-196580": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.31s)

                                                
                                    
TestNetworkPlugins/group/auto/NetCatPod (10.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-196580 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-vkssh" [1f7e99c0-c6c3-4423-a1c0-7d5d7a98b015] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-vkssh" [1f7e99c0-c6c3-4423-a1c0-7d5d7a98b015] Running
I1013 14:14:54.123588  849401 config.go:182] Loaded profile config "flannel-196580": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.1
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 10.003873992s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (10.20s)

                                                
                                    
TestNetworkPlugins/group/flannel/KubeletFlags (0.28s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-196580 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.28s)

                                                
                                    
TestNetworkPlugins/group/flannel/NetCatPod (8.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-196580 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-9j9x5" [2313bc6c-688c-4fcb-a43e-34003bdd71a6] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-9j9x5" [2313bc6c-688c-4fcb-a43e-34003bdd71a6] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 8.004098068s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (8.18s)

                                                
                                    
TestNetworkPlugins/group/auto/DNS (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-196580 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.13s)

                                                
                                    
TestNetworkPlugins/group/auto/Localhost (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-196580 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.12s)

                                                
                                    
TestNetworkPlugins/group/auto/HairPin (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-196580 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.12s)

                                                
                                    
TestNetworkPlugins/group/flannel/DNS (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-196580 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.14s)

                                                
                                    
TestNetworkPlugins/group/flannel/Localhost (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-196580 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.12s)

                                                
                                    
TestNetworkPlugins/group/flannel/HairPin (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-196580 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.11s)

TestNetworkPlugins/group/bridge/Start (66.87s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-196580 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-196580 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=docker: (1m6.865464262s)
--- PASS: TestNetworkPlugins/group/bridge/Start (66.87s)
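
Note: each Start subtest boots a fresh profile with the networking mode under test selected via --cni (or --network-plugin=kubenet), then waits up to 15 minutes for the node and core pods. The bridge run above, trimmed to its essentials:

        out/minikube-linux-amd64 start -p bridge-196580 --memory=3072 --wait=true --wait-timeout=15m --cni=bridge --driver=docker --container-runtime=docker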

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.44s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-196580 "pgrep -a kubelet"
I1013 14:15:23.395247  849401 config.go:182] Loaded profile config "enable-default-cni-196580": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.44s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.51s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-196580 replace --force -f testdata/netcat-deployment.yaml
I1013 14:15:23.877757  849401 kapi.go:136] Waiting for deployment netcat to stabilize, generation 1 observed generation 0 spec.replicas 1 status.replicas 0
I1013 14:15:23.880794  849401 kapi.go:136] Waiting for deployment netcat to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-h6s64" [e87b5511-de6e-4bc0-a29b-de8337ba3280] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-h6s64" [e87b5511-de6e-4bc0-a29b-de8337ba3280] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 11.004179123s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.51s)

TestNetworkPlugins/group/kubenet/Start (43.78s)

=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kubenet-196580 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kubenet-196580 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=docker  --container-runtime=docker: (43.784109743s)
--- PASS: TestNetworkPlugins/group/kubenet/Start (43.78s)

TestNetworkPlugins/group/enable-default-cni/DNS (0.16s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-196580 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.16s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.13s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-196580 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.13s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.14s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-196580 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.14s)

TestNetworkPlugins/group/calico/Start (48.66s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-196580 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-196580 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=docker: (48.655203882s)
--- PASS: TestNetworkPlugins/group/calico/Start (48.66s)

TestNetworkPlugins/group/kubenet/KubeletFlags (0.32s)

=== RUN   TestNetworkPlugins/group/kubenet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kubenet-196580 "pgrep -a kubelet"
I1013 14:16:09.194681  849401 config.go:182] Loaded profile config "kubenet-196580": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/kubenet/KubeletFlags (0.32s)

TestNetworkPlugins/group/kubenet/NetCatPod (9.20s)

=== RUN   TestNetworkPlugins/group/kubenet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kubenet-196580 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-jgp8t" [ebebfdc4-c1bb-4051-96f2-ddbe33eb79ca] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-jgp8t" [ebebfdc4-c1bb-4051-96f2-ddbe33eb79ca] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: app=netcat healthy within 9.003803487s
--- PASS: TestNetworkPlugins/group/kubenet/NetCatPod (9.20s)

TestNetworkPlugins/group/kindnet/Start (55.89s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-196580 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-196580 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=docker: (55.891148992s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (55.89s)

TestNetworkPlugins/group/kubenet/DNS (0.17s)

=== RUN   TestNetworkPlugins/group/kubenet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kubenet-196580 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kubenet/DNS (0.17s)

TestNetworkPlugins/group/kubenet/Localhost (0.14s)

=== RUN   TestNetworkPlugins/group/kubenet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kubenet-196580 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kubenet/Localhost (0.14s)

TestNetworkPlugins/group/kubenet/HairPin (0.14s)

=== RUN   TestNetworkPlugins/group/kubenet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kubenet-196580 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kubenet/HairPin (0.14s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.32s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-196580 "pgrep -a kubelet"
I1013 14:16:26.833166  849401 config.go:182] Loaded profile config "bridge-196580": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.32s)

TestNetworkPlugins/group/bridge/NetCatPod (11.23s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-196580 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-qr29r" [53130c37-bc9d-486a-8886-f5e121906274] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1013 14:16:30.452407  849401 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-845765/.minikube/profiles/addons-789670/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "netcat-cd4db9dbf-qr29r" [53130c37-bc9d-486a-8886-f5e121906274] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 11.003745348s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (11.23s)

TestNetworkPlugins/group/bridge/DNS (0.18s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-196580 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.18s)

TestNetworkPlugins/group/bridge/Localhost (0.19s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-196580 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.19s)

TestNetworkPlugins/group/bridge/HairPin (0.23s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-196580 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.23s)

TestNetworkPlugins/group/custom-flannel/Start (45.69s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-196580 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-196580 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=docker: (45.688439061s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (45.69s)
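
Note: --cni also accepts a path to a local manifest instead of a built-in plugin name; here the suite supplies its own flannel deployment from testdata:

        out/minikube-linux-amd64 start -p custom-flannel-196580 --memory=3072 --cni=testdata/kube-flannel.yaml --driver=docker --container-runtime=docker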

TestNetworkPlugins/group/calico/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:352: "calico-node-mw4v2" [c7473fc2-d244-40ca-a7d0-c5bcd0d9d8dd] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.004283733s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

TestNetworkPlugins/group/calico/KubeletFlags (0.30s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-196580 "pgrep -a kubelet"
I1013 14:16:51.026136  849401 config.go:182] Loaded profile config "calico-196580": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.30s)

TestNetworkPlugins/group/calico/NetCatPod (10.22s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-196580 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-mw995" [8b018170-bf24-4d08-a091-17e5f44d1d38] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-mw995" [8b018170-bf24-4d08-a091-17e5f44d1d38] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 10.004662892s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (10.22s)

TestNetworkPlugins/group/false/Start (67.12s)

=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p false-196580 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p false-196580 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=docker  --container-runtime=docker: (1m7.12341861s)
--- PASS: TestNetworkPlugins/group/false/Start (67.12s)

TestNetworkPlugins/group/calico/DNS (0.16s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-196580 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.16s)

TestNetworkPlugins/group/calico/Localhost (0.13s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-196580 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.13s)

TestNetworkPlugins/group/calico/HairPin (0.14s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-196580 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.14s)

TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:352: "kindnet-lgskt" [b5108cde-117f-4ab7-a961-2ad417f879d7] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.004071382s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

TestNetworkPlugins/group/kindnet/KubeletFlags (0.33s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-196580 "pgrep -a kubelet"
I1013 14:17:12.171974  849401 config.go:182] Loaded profile config "kindnet-196580": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.33s)

TestNetworkPlugins/group/kindnet/NetCatPod (9.23s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-196580 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-gg5fl" [39fc3785-e9c1-465a-970c-8454de4cc7d5] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-gg5fl" [39fc3785-e9c1-465a-970c-8454de4cc7d5] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 9.004785004s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (9.23s)

TestNetworkPlugins/group/kindnet/DNS (0.19s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-196580 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.19s)

TestNetworkPlugins/group/kindnet/Localhost (0.16s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-196580 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.16s)

TestNetworkPlugins/group/kindnet/HairPin (0.12s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-196580 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.12s)

TestStartStop/group/old-k8s-version/serial/FirstStart (43.29s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-266113 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.28.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-266113 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.28.0: (43.2902343s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (43.29s)
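
Note: the old-k8s-version group pins an older control plane with --kubernetes-version=v1.28.0 to confirm minikube can still drive clusters well behind the default used elsewhere in this run (v1.34.1):

        out/minikube-linux-amd64 start -p old-k8s-version-266113 --memory=3072 --driver=docker --container-runtime=docker --kubernetes-version=v1.28.0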

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.39s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-196580 "pgrep -a kubelet"
I1013 14:17:26.973342  849401 config.go:182] Loaded profile config "custom-flannel-196580": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.39s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (10.96s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-196580 replace --force -f testdata/netcat-deployment.yaml
I1013 14:17:27.273066  849401 kapi.go:136] Waiting for deployment netcat to stabilize, generation 1 observed generation 0 spec.replicas 1 status.replicas 0
I1013 14:17:27.562370  849401 kapi.go:136] Waiting for deployment netcat to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-82vv9" [0cd0e941-f358-4eb6-bdc0-6b3691e04530] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-82vv9" [0cd0e941-f358-4eb6-bdc0-6b3691e04530] Running
E1013 14:17:35.420707  849401 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-845765/.minikube/profiles/functional-574138/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 10.004399497s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (10.96s)

TestNetworkPlugins/group/custom-flannel/DNS (0.15s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-196580 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.15s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.14s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-196580 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.14s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.14s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-196580 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.14s)

TestStartStop/group/no-preload/serial/FirstStart (73.95s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-637171 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-637171 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.34.1: (1m13.949476135s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (73.95s)
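
Note: --preload=false makes minikube pull each Kubernetes image individually instead of unpacking the preloaded image tarball, which helps explain why this FirstStart (73.95s) is the slowest of the group:

        out/minikube-linux-amd64 start -p no-preload-637171 --memory=3072 --preload=false --driver=docker --container-runtime=docker --kubernetes-version=v1.34.1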

TestStartStop/group/embed-certs/serial/FirstStart (65.89s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-900384 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-900384 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.34.1: (1m5.887896793s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (65.89s)

TestNetworkPlugins/group/false/KubeletFlags (0.40s)

=== RUN   TestNetworkPlugins/group/false/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p false-196580 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/false/KubeletFlags (0.40s)

TestStartStop/group/old-k8s-version/serial/DeployApp (10.32s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-266113 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [549fcb72-36f2-4398-8605-cd7071869f0a] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
I1013 14:18:06.867586  849401 config.go:182] Loaded profile config "false-196580": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.1
helpers_test.go:352: "busybox" [549fcb72-36f2-4398-8605-cd7071869f0a] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 10.004027377s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-266113 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (10.32s)
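
Note: DeployApp creates a standalone busybox pod, waits for it to run, then reads the container's open-file limit; an unexpected "ulimit -n" value would typically implicate container-runtime defaults rather than the cluster itself. By hand:

        kubectl --context old-k8s-version-266113 create -f testdata/busybox.yaml
        kubectl --context old-k8s-version-266113 exec busybox -- /bin/sh -c "ulimit -n"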

TestNetworkPlugins/group/false/NetCatPod (9.22s)

=== RUN   TestNetworkPlugins/group/false/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context false-196580 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/false/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-drvsc" [f4f3a8e2-bf5b-4399-a176-594cdc5d20a7] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-drvsc" [f4f3a8e2-bf5b-4399-a176-594cdc5d20a7] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/false/NetCatPod: app=netcat healthy within 9.004694387s
--- PASS: TestNetworkPlugins/group/false/NetCatPod (9.22s)

TestNetworkPlugins/group/false/DNS (0.14s)

=== RUN   TestNetworkPlugins/group/false/DNS
net_test.go:175: (dbg) Run:  kubectl --context false-196580 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/false/DNS (0.14s)

TestNetworkPlugins/group/false/Localhost (0.12s)

=== RUN   TestNetworkPlugins/group/false/Localhost
net_test.go:194: (dbg) Run:  kubectl --context false-196580 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/false/Localhost (0.12s)

TestNetworkPlugins/group/false/HairPin (0.12s)

=== RUN   TestNetworkPlugins/group/false/HairPin
net_test.go:264: (dbg) Run:  kubectl --context false-196580 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/false/HairPin (0.12s)
E1013 14:20:23.829628  849401 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-845765/.minikube/profiles/enable-default-cni-196580/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 14:20:23.836040  849401 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-845765/.minikube/profiles/enable-default-cni-196580/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 14:20:23.847447  849401 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-845765/.minikube/profiles/enable-default-cni-196580/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 14:20:23.868859  849401 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-845765/.minikube/profiles/enable-default-cni-196580/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 14:20:23.910324  849401 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-845765/.minikube/profiles/enable-default-cni-196580/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 14:20:23.991819  849401 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-845765/.minikube/profiles/enable-default-cni-196580/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 14:20:24.153298  849401 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-845765/.minikube/profiles/enable-default-cni-196580/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 14:20:24.475212  849401 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-845765/.minikube/profiles/enable-default-cni-196580/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 14:20:25.116978  849401 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-845765/.minikube/profiles/enable-default-cni-196580/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 14:20:26.398832  849401 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-845765/.minikube/profiles/enable-default-cni-196580/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
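
Note: the burst of cert_rotation errors above appears to come from client-go's transport cache still trying to reload the client certificate of the enable-default-cni-196580 profile after that profile's files were deleted; it reads as leftover noise from a torn-down cluster, not a failure in the tests that follow.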

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.87s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-266113 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-266113 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.87s)
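
Note: the addon is enabled with overrides that point metrics-server at an unreachable registry (fake.domain), which suggests the subtest exercises the enable/override plumbing rather than a working metrics pipeline. The override syntax pairs a component name with an image and a registry:

        out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-266113 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain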

TestStartStop/group/old-k8s-version/serial/Stop (10.86s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-266113 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-266113 --alsologtostderr -v=3: (10.862604712s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (10.86s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.20s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-266113 -n old-k8s-version-266113
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-266113 -n old-k8s-version-266113: exit status 7 (87.038358ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-266113 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.20s)
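
Note: "minikube status" exits non-zero whenever a component is not running, so the test tolerates exit status 7 here; "Stopped" on stdout is exactly what is expected between Stop and SecondStart. The same probe by hand:

        out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-266113 -n old-k8s-version-266113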

TestStartStop/group/old-k8s-version/serial/SecondStart (48.00s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-266113 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.28.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-266113 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.28.0: (47.675429517s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-266113 -n old-k8s-version-266113
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (48.00s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (40.88s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-815353 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-815353 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.34.1: (40.87907014s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (40.88s)
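
Note: this group moves the API server off minikube's default port 8443 via --apiserver-port=8444 to confirm nothing in the tooling hard-codes the port:

        out/minikube-linux-amd64 start -p default-k8s-diff-port-815353 --memory=3072 --apiserver-port=8444 --driver=docker --container-runtime=docker --kubernetes-version=v1.34.1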

TestStartStop/group/no-preload/serial/DeployApp (11.25s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-637171 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [66f42aa1-3834-4e44-b1d3-dede1341e3b8] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [66f42aa1-3834-4e44-b1d3-dede1341e3b8] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 11.004135338s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-637171 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (11.25s)

TestStartStop/group/embed-certs/serial/DeployApp (8.25s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-900384 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [11595e79-8b45-4b85-819f-051474685e73] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [11595e79-8b45-4b85-819f-051474685e73] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 8.003801275s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-900384 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (8.25s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.82s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-637171 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-637171 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.82s)

TestStartStop/group/no-preload/serial/Stop (10.82s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-637171 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-637171 --alsologtostderr -v=3: (10.820883698s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (10.82s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.78s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-900384 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-900384 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.78s)

TestStartStop/group/embed-certs/serial/Stop (10.88s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-900384 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-900384 --alsologtostderr -v=3: (10.884740767s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (10.88s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.00s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-rg6zb" [6b1e8c12-f462-4f19-8f5e-167c301a2cf1] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003529596s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.00s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.27s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-815353 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [eafdd0c7-dbbf-41e0-9abb-903346b5edc3] Pending
helpers_test.go:352: "busybox" [eafdd0c7-dbbf-41e0-9abb-903346b5edc3] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [eafdd0c7-dbbf-41e0-9abb-903346b5edc3] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 8.003706293s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-815353 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.27s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.20s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-637171 -n no-preload-637171
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-637171 -n no-preload-637171: exit status 7 (73.461358ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-637171 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.20s)

TestStartStop/group/no-preload/serial/SecondStart (46.94s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-637171 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-637171 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.34.1: (46.621731974s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-637171 -n no-preload-637171
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (46.94s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.09s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-rg6zb" [6b1e8c12-f462-4f19-8f5e-167c301a2cf1] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.006667076s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-266113 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.09s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.82s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-815353 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-815353 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.82s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (12.01s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-815353 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-815353 --alsologtostderr -v=3: (12.0131399s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (12.01s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.20s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-900384 -n embed-certs-900384
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-900384 -n embed-certs-900384: exit status 7 (82.683997ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-900384 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.20s)

TestStartStop/group/embed-certs/serial/SecondStart (54.97s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-900384 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-900384 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.34.1: (54.621686536s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-900384 -n embed-certs-900384
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (54.97s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.28s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-266113 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.28s)

TestStartStop/group/old-k8s-version/serial/Pause (2.90s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-266113 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-266113 -n old-k8s-version-266113
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-266113 -n old-k8s-version-266113: exit status 2 (472.280805ms)

                                                
                                                
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-266113 -n old-k8s-version-266113
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-266113 -n old-k8s-version-266113: exit status 2 (364.701727ms)

                                                
                                                
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p old-k8s-version-266113 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-266113 -n old-k8s-version-266113
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-266113 -n old-k8s-version-266113
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (2.90s)
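
The pause round-trip above reads through the same exit-code bitmask: while paused, the host keeps running but the apiserver reports Paused and the kubelet Stopped, so both status probes exit 2 (the cluster-not-running value) rather than 0, and the test accepts that before unpausing. A sketch of the same flow, assuming minikube is on PATH and the profile from this log already exists:

// pause_roundtrip.go: pause, observe the expected exit code 2, then unpause.
package main

import (
	"fmt"
	"os/exec"
)

// run executes minikube with args and returns combined output plus the exit code.
func run(args ...string) (string, int) {
	out, err := exec.Command("minikube", args...).CombinedOutput()
	if ee, ok := err.(*exec.ExitError); ok {
		return string(out), ee.ExitCode()
	}
	return string(out), 0
}

func main() {
	profile := "old-k8s-version-266113"
	run("pause", "-p", profile)

	apiserver, code := run("status", "--format={{.APIServer}}", "-p", profile)
	fmt.Printf("apiserver=%s exit=%d\n", apiserver, code) // expect Paused / 2

	run("unpause", "-p", profile)

	kubelet, code := run("status", "--format={{.Kubelet}}", "-p", profile)
	fmt.Printf("kubelet=%s exit=%d\n", kubelet, code) // expect Running / 0
}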

                                                
                                    
TestStartStop/group/newest-cni/serial/FirstStart (31.19s)
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-424269 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-424269 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.34.1: (31.186638638s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (31.19s)
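
Note the relaxed readiness gate in this start: until a CNI is installed, no pod that needs pod networking can schedule, so the test waits only for apiserver, system_pods and default_sa instead of --wait=true, and hands kubeadm the pod network CIDR via --extra-config. The same invocation driven from Go, with every flag value copied verbatim from the log (a sketch, not the test's actual code):

// cni_start.go: launch the CNI-mode profile with the flags shown above.
package main

import (
	"os"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-linux-amd64", "start",
		"-p", "newest-cni-424269",
		"--memory=3072",
		"--wait=apiserver,system_pods,default_sa", // only components that get ready without a CNI
		"--network-plugin=cni",
		"--extra-config=kubeadm.pod-network-cidr=10.42.0.0/16", // CIDR the CNI will use later
		"--driver=docker",
		"--container-runtime=docker",
		"--kubernetes-version=v1.34.1",
	)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		panic(err)
	}
}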

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.25s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-815353 -n default-k8s-diff-port-815353
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-815353 -n default-k8s-diff-port-815353: exit status 7 (112.801382ms)

                                                
                                                
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-815353 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.25s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (49.53s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-815353 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.34.1
E1013 14:19:47.839022  849401 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-845765/.minikube/profiles/flannel-196580/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 14:19:47.845966  849401 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-845765/.minikube/profiles/flannel-196580/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 14:19:47.857424  849401 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-845765/.minikube/profiles/flannel-196580/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 14:19:47.879443  849401 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-845765/.minikube/profiles/flannel-196580/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 14:19:47.921548  849401 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-845765/.minikube/profiles/flannel-196580/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 14:19:48.003706  849401 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-845765/.minikube/profiles/flannel-196580/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 14:19:48.165656  849401 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-845765/.minikube/profiles/flannel-196580/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 14:19:48.487623  849401 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-845765/.minikube/profiles/flannel-196580/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 14:19:49.129058  849401 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-845765/.minikube/profiles/flannel-196580/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 14:19:49.877276  849401 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-845765/.minikube/profiles/auto-196580/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 14:19:49.883693  849401 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-845765/.minikube/profiles/auto-196580/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 14:19:49.895157  849401 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-845765/.minikube/profiles/auto-196580/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 14:19:49.916520  849401 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-845765/.minikube/profiles/auto-196580/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 14:19:49.958312  849401 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-845765/.minikube/profiles/auto-196580/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 14:19:50.039935  849401 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-845765/.minikube/profiles/auto-196580/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 14:19:50.201555  849401 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-845765/.minikube/profiles/auto-196580/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 14:19:50.410451  849401 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-845765/.minikube/profiles/flannel-196580/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 14:19:50.523245  849401 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-845765/.minikube/profiles/auto-196580/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 14:19:51.165324  849401 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-845765/.minikube/profiles/auto-196580/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 14:19:52.448223  849401 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-845765/.minikube/profiles/auto-196580/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 14:19:52.971904  849401 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-845765/.minikube/profiles/flannel-196580/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 14:19:55.010139  849401 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-845765/.minikube/profiles/auto-196580/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 14:19:58.093563  849401 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-845765/.minikube/profiles/flannel-196580/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 14:20:00.131642  849401 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-845765/.minikube/profiles/auto-196580/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-815353 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.34.1: (49.164534448s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-815353 -n default-k8s-diff-port-815353
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (49.53s)
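
The repeated E1013 cert_rotation.go:172 lines above are background noise from client-go rather than failures of this test: the shared kubeconfig still points at client certificates for the flannel-196580 and auto-196580 profiles, whose files disappeared when those network-plugin clusters were deleted earlier in the run, and the certificate-rotation watcher keeps retrying the open. A hedged cleanup sketch (not something the suite does) that prunes kubeconfig entries whose client-certificate file is gone, using k8s.io/client-go/tools/clientcmd; it assumes minikube's convention of naming the cluster, context and user after the profile:

// prune_kubeconfig.go: drop kubeconfig entries whose client cert no longer exists.
package main

import (
	"os"

	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	path := clientcmd.RecommendedHomeFile // ~/.kube/config
	cfg, err := clientcmd.LoadFromFile(path)
	if err != nil {
		panic(err)
	}
	for name, auth := range cfg.AuthInfos {
		if auth.ClientCertificate == "" {
			continue // inline or token credentials, leave alone
		}
		if _, statErr := os.Stat(auth.ClientCertificate); os.IsNotExist(statErr) {
			delete(cfg.AuthInfos, name) // e.g. the stale flannel-196580 entry
			delete(cfg.Contexts, name)  // minikube uses the profile name for all three
			delete(cfg.Clusters, name)
		}
	}
	if err := clientcmd.WriteToFile(*cfg, path); err != nil {
		panic(err)
	}
}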

                                                
                                    
TestStartStop/group/newest-cni/serial/DeployApp (0s)
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.72s)
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-424269 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.72s)
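
The --images and --registries flags shown here override an addon's default images per component: MetricsServer is pointed at registry.k8s.io/echoserver:1.4 behind the nonexistent registry fake.domain, which exercises the override plumbing without ever pulling a real metrics-server image. The same call from Go, values verbatim from the log (a sketch):

// addon_override.go: enable metrics-server with a substituted image and registry.
package main

import (
	"os"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-linux-amd64", "addons", "enable", "metrics-server",
		"-p", "newest-cni-424269",
		"--images=MetricsServer=registry.k8s.io/echoserver:1.4", // Component=image override
		"--registries=MetricsServer=fake.domain",                // Component=registry override
	)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		panic(err)
	}
}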

                                                
                                    
TestStartStop/group/newest-cni/serial/Stop (10.83s)
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-424269 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-424269 --alsologtostderr -v=3: (10.82872306s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (10.83s)

                                                
                                    
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-545gn" [ccf1b842-a68e-4168-bae8-b084b51e8671] Running
E1013 14:20:08.334790  849401 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-845765/.minikube/profiles/flannel-196580/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 14:20:10.373012  849401 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-845765/.minikube/profiles/auto-196580/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003911038s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.08s)
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-545gn" [ccf1b842-a68e-4168-bae8-b084b51e8671] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.00454644s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context no-preload-637171 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.08s)
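
UserAppExistsAfterStop and AddonExistsAfterStop both reduce to "poll until a pod matching k8s-app=kubernetes-dashboard is Running, within a 9m0s budget". A client-go sketch of that wait, assuming the profile's context is the current one in the default kubeconfig:

// dashboard_wait.go: wait for a Running pod carrying the dashboard label.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	deadline := time.Now().Add(9 * time.Minute) // same budget as the test
	for time.Now().Before(deadline) {
		pods, err := cs.CoreV1().Pods("kubernetes-dashboard").List(context.TODO(),
			metav1.ListOptions{LabelSelector: "k8s-app=kubernetes-dashboard"})
		if err == nil {
			for _, p := range pods.Items {
				if p.Status.Phase == corev1.PodRunning {
					fmt.Println("healthy:", p.Name)
					return
				}
			}
		}
		time.Sleep(2 * time.Second)
	}
	panic("dashboard pod never became Running")
}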

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.18s)
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-424269 -n newest-cni-424269
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-424269 -n newest-cni-424269: exit status 7 (73.352037ms)

                                                
                                                
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-424269 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.18s)

                                                
                                    
TestStartStop/group/newest-cni/serial/SecondStart (13.51s)
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-424269 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-424269 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.34.1: (13.174936105s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-424269 -n newest-cni-424269
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (13.51s)

                                                
                                    
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.24s)
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-637171 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.24s)
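
VerifyKubernetesImages lists every image loaded in the profile and reports anything outside the expected control-plane set; the gcr.io/k8s-minikube/busybox hit is the user app deployed earlier in this serial group, so it is logged and the test still passes. A simplified sketch of that audit (the real test compares against a per-version allowlist; this version only checks the registry prefix of the default line-per-image output):

// image_audit.go: list images in the profile and flag non-control-plane ones.
package main

import (
	"bufio"
	"bytes"
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("out/minikube-linux-amd64",
		"-p", "no-preload-637171", "image", "list").Output()
	if err != nil {
		panic(err)
	}
	sc := bufio.NewScanner(bytes.NewReader(out))
	for sc.Scan() {
		ref := strings.TrimSpace(sc.Text())
		if ref == "" {
			continue
		}
		// registry.k8s.io hosts the v1.34.x control-plane images; anything else
		// (the busybox test app, dashboard images, ...) gets reported here.
		if !strings.HasPrefix(ref, "registry.k8s.io/") {
			fmt.Println("Found non-minikube image:", ref)
		}
	}
}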

                                                
                                    
TestStartStop/group/no-preload/serial/Pause (2.49s)
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-637171 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-637171 -n no-preload-637171
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-637171 -n no-preload-637171: exit status 2 (323.012027ms)

                                                
                                                
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-637171 -n no-preload-637171
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-637171 -n no-preload-637171: exit status 2 (324.203342ms)

                                                
                                                
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p no-preload-637171 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-637171 -n no-preload-637171
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-637171 -n no-preload-637171
--- PASS: TestStartStop/group/no-preload/serial/Pause (2.49s)

                                                
                                    
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-wdvzx" [79c10f54-042c-4d75-b155-e9a7fa2da06b] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.006154359s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-sz79w" [e9c65739-318d-4d18-b3a6-1d12bb079b07] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.005180446s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.13s)
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-wdvzx" [79c10f54-042c-4d75-b155-e9a7fa2da06b] Running
E1013 14:20:28.817337  849401 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-845765/.minikube/profiles/flannel-196580/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 14:20:28.960796  849401 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-845765/.minikube/profiles/enable-default-cni-196580/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004923016s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context embed-certs-900384 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.13s)

                                                
                                    
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.26s)
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-424269 image list --format=json
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.26s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Pause (2.44s)
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-424269 --alsologtostderr -v=1
E1013 14:20:30.854724  849401 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-845765/.minikube/profiles/auto-196580/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-424269 -n newest-cni-424269
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-424269 -n newest-cni-424269: exit status 2 (321.576762ms)

                                                
                                                
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-424269 -n newest-cni-424269
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-424269 -n newest-cni-424269: exit status 2 (314.273437ms)

                                                
                                                
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-424269 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-424269 -n newest-cni-424269
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-424269 -n newest-cni-424269
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.44s)

                                                
                                    
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.26s)
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-900384 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.26s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Pause (2.47s)
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-900384 --alsologtostderr -v=1
E1013 14:20:34.082439  849401 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-845765/.minikube/profiles/enable-default-cni-196580/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-900384 -n embed-certs-900384
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-900384 -n embed-certs-900384: exit status 2 (327.555877ms)

                                                
                                                
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-900384 -n embed-certs-900384
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-900384 -n embed-certs-900384: exit status 2 (322.817557ms)

                                                
                                                
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p embed-certs-900384 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-900384 -n embed-certs-900384
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-900384 -n embed-certs-900384
--- PASS: TestStartStop/group/embed-certs/serial/Pause (2.47s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.07s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-sz79w" [e9c65739-318d-4d18-b3a6-1d12bb079b07] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003731078s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context default-k8s-diff-port-815353 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.07s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.22s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-815353 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.22s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Pause (2.35s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-815353 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-815353 -n default-k8s-diff-port-815353
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-815353 -n default-k8s-diff-port-815353: exit status 2 (306.978282ms)

                                                
                                                
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-815353 -n default-k8s-diff-port-815353
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-815353 -n default-k8s-diff-port-815353: exit status 2 (301.264269ms)

                                                
                                                
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p default-k8s-diff-port-815353 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-815353 -n default-k8s-diff-port-815353
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-815353 -n default-k8s-diff-port-815353
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (2.35s)

                                                
                                    

Test skip (22/347)

TestDownloadOnly/v1.28.0/cached-images (0s)
=== RUN   TestDownloadOnly/v1.28.0/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.0/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.0/binaries (0s)
=== RUN   TestDownloadOnly/v1.28.0/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.0/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.0/kubectl (0s)
=== RUN   TestDownloadOnly/v1.28.0/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.0/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.34.1/cached-images (0s)
=== RUN   TestDownloadOnly/v1.34.1/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.34.1/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.34.1/binaries (0s)
=== RUN   TestDownloadOnly/v1.34.1/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.34.1/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.34.1/kubectl (0s)
=== RUN   TestDownloadOnly/v1.34.1/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.34.1/kubectl (0.00s)

                                                
                                    
TestAddons/serial/GCPAuth/RealCredentials (0s)
=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:763: skipping GCPAuth addon test until 'Permission "artifactregistry.repositories.downloadArtifacts" denied on resource "projects/k8s-minikube/locations/us/repositories/test-artifacts" (or it may not exist)' issue is resolved
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

                                                
                                    
TestAddons/parallel/Olm (0s)
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm
=== CONT  TestAddons/parallel/Olm
addons_test.go:483: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
TestDockerEnvContainerd (0s)
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker true linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
TestHyperKitDriverInstallOrUpdate (0s)
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:114: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
TestHyperkitDriverSkipUpgrade (0s)
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:178: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
TestFunctional/parallel/PodmanEnv (0s)
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

                                                
                                    
TestFunctionalNewestKubernetes (0s)
=== RUN   TestFunctionalNewestKubernetes
functional_test.go:82: 
--- SKIP: TestFunctionalNewestKubernetes (0.00s)

                                                
                                    
TestGvisorAddon (0s)
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
TestImageBuild/serial/validateImageBuildWithBuildEnv (0s)
=== RUN   TestImageBuild/serial/validateImageBuildWithBuildEnv
image_test.go:114: skipping due to https://github.com/kubernetes/minikube/issues/12431
--- SKIP: TestImageBuild/serial/validateImageBuildWithBuildEnv (0.00s)

                                                
                                    
TestChangeNoneUser (0s)
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
TestScheduledStopWindows (0s)
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
TestNetworkPlugins/group/cilium (7.71s)
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:636: 
----------------------- debugLogs start: cilium-196580 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-196580

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-196580

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-196580

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-196580

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-196580

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-196580

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-196580

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-196580

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-196580

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-196580

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-196580" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-196580"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-196580" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-196580"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-196580" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-196580"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-196580

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-196580" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-196580"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-196580" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-196580"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-196580" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-196580" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-196580" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-196580" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-196580" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-196580" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-196580" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-196580" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-196580" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-196580"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-196580" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-196580"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-196580" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-196580"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-196580" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-196580"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-196580" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-196580"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-196580

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-196580

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-196580" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-196580" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-196580

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-196580

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-196580" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-196580" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-196580" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-196580" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-196580" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-196580" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-196580"

>>> host: kubelet daemon config:
* Profile "cilium-196580" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-196580"

>>> k8s: kubelet logs:
* Profile "cilium-196580" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-196580"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-196580" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-196580"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-196580" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-196580"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-196580

>>> host: docker daemon status:
* Profile "cilium-196580" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-196580"

>>> host: docker daemon config:
* Profile "cilium-196580" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-196580"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-196580" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-196580"

>>> host: docker system info:
* Profile "cilium-196580" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-196580"

>>> host: cri-docker daemon status:
* Profile "cilium-196580" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-196580"

>>> host: cri-docker daemon config:
* Profile "cilium-196580" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-196580"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-196580" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-196580"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-196580" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-196580"

>>> host: cri-dockerd version:
* Profile "cilium-196580" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-196580"

>>> host: containerd daemon status:
* Profile "cilium-196580" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-196580"

>>> host: containerd daemon config:
* Profile "cilium-196580" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-196580"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-196580" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-196580"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-196580" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-196580"

>>> host: containerd config dump:
* Profile "cilium-196580" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-196580"

>>> host: crio daemon status:
* Profile "cilium-196580" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-196580"

>>> host: crio daemon config:
* Profile "cilium-196580" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-196580"

>>> host: /etc/crio:
* Profile "cilium-196580" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-196580"

>>> host: crio config:
* Profile "cilium-196580" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-196580"

----------------------- debugLogs end: cilium-196580 [took: 7.469052143s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-196580" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-196580
--- SKIP: TestNetworkPlugins/group/cilium (7.71s)
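Note on the dump above: every probe fails identically because the cilium group was skipped before a cluster was ever started, so no kubeconfig context named cilium-196580 exists (the empty ">>> k8s: kubectl config" section confirms this). A minimal Go sketch of that failure mode, assuming only that kubectl is on PATH; the probe command is illustrative, and the context name is taken from this log:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Querying a context that was never created reproduces the
	// `error: context "cilium-196580" does not exist` lines above.
	cmd := exec.Command("kubectl", "--context", "cilium-196580", "get", "pods", "-A")
	out, err := cmd.CombinedOutput()
	fmt.Print(string(out))
	if err != nil {
		// kubectl exits non-zero when the named context is missing.
		fmt.Println("probe failed:", err)
	}
}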

TestStartStop/group/disable-driver-mounts (0.17s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-073301" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-073301
--- SKIP: TestStartStop/group/disable-driver-mounts (0.17s)
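The skip at start_stop_delete_test.go:101 is a driver guard: this run uses the docker driver, and the test only exercises --disable-driver-mounts under virtualbox. A hedged sketch of such a guard; driverName is a hypothetical stand-in, not minikube's actual helper:

package disablemounts_test

import (
	"os"
	"testing"
)

// driverName is a hypothetical stand-in for minikube's driver detection.
func driverName() string { return os.Getenv("TEST_DRIVER") }

func TestDisableDriverMounts(t *testing.T) {
	// Mirrors the guard implied by the skip message above: bail out
	// unless the run targets the virtualbox driver.
	if driverName() != "virtualbox" {
		t.Skipf("skipping %s - only runs on virtualbox", t.Name())
	}
	// ... the test body would exercise minikube start --disable-driver-mounts here ...
}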
