Test Report: Docker_Linux_crio_arm64 16569

                    
852d9197a19e9ebea28af4d23e9565040e130819:2023-05-31:29511

Failed tests (9/296)

Order  Failed test  Duration (s)
24 TestAddons/parallel/Registry 180.22
25 TestAddons/parallel/Ingress 168.15
152 TestIngressAddonLegacy/serial/ValidateIngressAddons 175.5
202 TestMultiNode/serial/PingHostFrom2Pods 4
217 TestPreload 172.17
223 TestRunningBinaryUpgrade 68.67
226 TestMissingContainerUpgrade 90.67
238 TestStoppedBinaryUpgrade/Upgrade 164.44
249 TestPause/serial/SecondStartNoReconfiguration 53.63
TestAddons/parallel/Registry (180.22s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:306: registry stabilized in 59.694997ms
addons_test.go:308: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-6hcmh" [98c2e1ee-6d1b-4140-a410-92c62d5b0c8e] Running
addons_test.go:308: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.01166793s
addons_test.go:311: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-c7bxw" [6510a1f0-5ba2-49e1-8749-6a1b8101c599] Running / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
addons_test.go:311: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.013135285s
addons_test.go:316: (dbg) Run:  kubectl --context addons-748280 delete po -l run=registry-test --now
addons_test.go:321: (dbg) Run:  kubectl --context addons-748280 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:321: (dbg) Done: kubectl --context addons-748280 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (4.597588779s)
addons_test.go:335: (dbg) Run:  out/minikube-linux-arm64 -p addons-748280 ip
2023/05/31 18:47:20 [DEBUG] GET http://192.168.49.2:5000
2023/05/31 18:47:20 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2023/05/31 18:47:20 [DEBUG] GET http://192.168.49.2:5000: retrying in 1s (4 left)
2023/05/31 18:47:21 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2023/05/31 18:47:21 [DEBUG] GET http://192.168.49.2:5000: retrying in 2s (3 left)
addons_test.go:361: failed to check external access to http://192.168.49.2:5000: GET http://192.168.49.2:5000 giving up after 5 attempt(s): Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
addons_test.go:364: (dbg) Run:  out/minikube-linux-arm64 -p addons-748280 addons disable registry --alsologtostderr -v=1
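
The refused GETs above are the recorded root cause of this failure: every attempt to reach http://192.168.49.2:5000 from outside the cluster was rejected. A minimal Go sketch of an equivalent probe, useful for reproducing the check by hand (this is not the test's actual code; the URL and the five-attempt, doubling-backoff schedule are taken from the DEBUG/ERR lines above):

	package main

	import (
		"fmt"
		"net/http"
		"time"
	)

	// checkRegistry mirrors the logged probe: GET the URL, retry with a
	// doubling backoff, and give up after five attempts.
	func checkRegistry(url string) error {
		backoff := time.Second
		var lastErr error
		for attempt := 1; attempt <= 5; attempt++ {
			resp, err := http.Get(url)
			if err == nil {
				resp.Body.Close()
				return nil // the registry answered; external access works
			}
			lastErr = err
			if attempt < 5 {
				fmt.Printf("[DEBUG] GET %s: retrying in %v (%d left)\n", url, backoff, 5-attempt)
				time.Sleep(backoff)
				backoff *= 2
			}
		}
		return fmt.Errorf("GET %s giving up after 5 attempt(s): %w", url, lastErr)
	}

	func main() {
		if err := checkRegistry("http://192.168.49.2:5000"); err != nil {
			fmt.Println(err) // in this run: connect: connection refused on every attempt
		}
	}
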
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestAddons/parallel/Registry]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect addons-748280
helpers_test.go:235: (dbg) docker inspect addons-748280:

-- stdout --
	[
	    {
	        "Id": "7a58041a0a26da39225fc1484cf566703381ab580d380a421227d99124b51cb9",
	        "Created": "2023-05-31T18:44:57.223926968Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 8785,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-05-31T18:44:57.591241228Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:dee4774de4f99268e16c76379a36a4607bed47635a069a7e60c17cd24d9aaa76",
	        "ResolvConfPath": "/var/lib/docker/containers/7a58041a0a26da39225fc1484cf566703381ab580d380a421227d99124b51cb9/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/7a58041a0a26da39225fc1484cf566703381ab580d380a421227d99124b51cb9/hostname",
	        "HostsPath": "/var/lib/docker/containers/7a58041a0a26da39225fc1484cf566703381ab580d380a421227d99124b51cb9/hosts",
	        "LogPath": "/var/lib/docker/containers/7a58041a0a26da39225fc1484cf566703381ab580d380a421227d99124b51cb9/7a58041a0a26da39225fc1484cf566703381ab580d380a421227d99124b51cb9-json.log",
	        "Name": "/addons-748280",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-748280:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-748280",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/c904836411e908252a69d24d682ce2db18fd81888b93f4c037d25dd6cf4c1d34-init/diff:/var/lib/docker/overlay2/548bced7e749d102323bab71db162b075785f916e2a896d29f3adc2c3d7fbea8/diff",
	                "MergedDir": "/var/lib/docker/overlay2/c904836411e908252a69d24d682ce2db18fd81888b93f4c037d25dd6cf4c1d34/merged",
	                "UpperDir": "/var/lib/docker/overlay2/c904836411e908252a69d24d682ce2db18fd81888b93f4c037d25dd6cf4c1d34/diff",
	                "WorkDir": "/var/lib/docker/overlay2/c904836411e908252a69d24d682ce2db18fd81888b93f4c037d25dd6cf4c1d34/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-748280",
	                "Source": "/var/lib/docker/volumes/addons-748280/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-748280",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685034446-16582@sha256:aa728b22374c829d1e5b0a5d64d51d3e0ae0f2b191381d957516fdff68f357c8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-748280",
	                "name.minikube.sigs.k8s.io": "addons-748280",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "a3812706bf440a07b0ac0dc7b60a1480023e8c8842b2635e8ad06f1df8b39603",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32772"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32771"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32768"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32770"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32769"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/a3812706bf44",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-748280": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "7a58041a0a26",
	                        "addons-748280"
	                    ],
	                    "NetworkID": "760d9ac68c2919cc41692d416f20a39b5774ce399f6df40f2bb0801afd196ee3",
	                    "EndpointID": "d7565ee75e21f65b3313c923380dd3d07fcb9d6db2fb265fe468a0fa8884e40d",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
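
Worth noting from the inspect output: 5000/tcp is published only on loopback (127.0.0.1:32770 in this run), while the failed probe dialed the container's network IP (192.168.49.2) directly, which suggests nothing inside the node container was listening on that port yet (consistent with the registry-proxy pod's ContainersNotReady status earlier). A sketch for reading the host-side mapping programmatically, reusing the Go template that the cli_runner lines later in this log apply to 22/tcp (container name taken from this run):

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// Query dockerd for the host port mapped to the container's 5000/tcp.
		out, err := exec.Command("docker", "container", "inspect", "-f",
			`{{(index (index .NetworkSettings.Ports "5000/tcp") 0).HostPort}}`,
			"addons-748280").Output()
		if err != nil {
			fmt.Println("docker inspect failed:", err)
			return
		}
		fmt.Printf("registry port published on 127.0.0.1:%s", out) // 32770 for this run
	}
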
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-748280 -n addons-748280
helpers_test.go:244: <<< TestAddons/parallel/Registry FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Registry]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p addons-748280 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p addons-748280 logs -n 25: (1.924406957s)
helpers_test.go:252: TestAddons/parallel/Registry logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-924367   | jenkins | v1.30.1 | 31 May 23 18:44 UTC |                     |
	|         | -p download-only-924367        |                        |         |         |                     |                     |
	|         | --force --alsologtostderr      |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0   |                        |         |         |                     |                     |
	|         | --container-runtime=crio       |                        |         |         |                     |                     |
	|         | --driver=docker                |                        |         |         |                     |                     |
	|         | --container-runtime=crio       |                        |         |         |                     |                     |
	| start   | -o=json --download-only        | download-only-924367   | jenkins | v1.30.1 | 31 May 23 18:44 UTC |                     |
	|         | -p download-only-924367        |                        |         |         |                     |                     |
	|         | --force --alsologtostderr      |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.27.2   |                        |         |         |                     |                     |
	|         | --container-runtime=crio       |                        |         |         |                     |                     |
	|         | --driver=docker                |                        |         |         |                     |                     |
	|         | --container-runtime=crio       |                        |         |         |                     |                     |
	| delete  | --all                          | minikube               | jenkins | v1.30.1 | 31 May 23 18:44 UTC | 31 May 23 18:44 UTC |
	| delete  | -p download-only-924367        | download-only-924367   | jenkins | v1.30.1 | 31 May 23 18:44 UTC | 31 May 23 18:44 UTC |
	| delete  | -p download-only-924367        | download-only-924367   | jenkins | v1.30.1 | 31 May 23 18:44 UTC | 31 May 23 18:44 UTC |
	| start   | --download-only -p             | download-docker-298073 | jenkins | v1.30.1 | 31 May 23 18:44 UTC |                     |
	|         | download-docker-298073         |                        |         |         |                     |                     |
	|         | --alsologtostderr              |                        |         |         |                     |                     |
	|         | --driver=docker                |                        |         |         |                     |                     |
	|         | --container-runtime=crio       |                        |         |         |                     |                     |
	| delete  | -p download-docker-298073      | download-docker-298073 | jenkins | v1.30.1 | 31 May 23 18:44 UTC | 31 May 23 18:44 UTC |
	| start   | --download-only -p             | binary-mirror-781489   | jenkins | v1.30.1 | 31 May 23 18:44 UTC |                     |
	|         | binary-mirror-781489           |                        |         |         |                     |                     |
	|         | --alsologtostderr              |                        |         |         |                     |                     |
	|         | --binary-mirror                |                        |         |         |                     |                     |
	|         | http://127.0.0.1:45143         |                        |         |         |                     |                     |
	|         | --driver=docker                |                        |         |         |                     |                     |
	|         | --container-runtime=crio       |                        |         |         |                     |                     |
	| delete  | -p binary-mirror-781489        | binary-mirror-781489   | jenkins | v1.30.1 | 31 May 23 18:44 UTC | 31 May 23 18:44 UTC |
	| start   | -p addons-748280               | addons-748280          | jenkins | v1.30.1 | 31 May 23 18:44 UTC | 31 May 23 18:47 UTC |
	|         | --wait=true --memory=4000      |                        |         |         |                     |                     |
	|         | --alsologtostderr              |                        |         |         |                     |                     |
	|         | --addons=registry              |                        |         |         |                     |                     |
	|         | --addons=metrics-server        |                        |         |         |                     |                     |
	|         | --addons=volumesnapshots       |                        |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver   |                        |         |         |                     |                     |
	|         | --addons=gcp-auth              |                        |         |         |                     |                     |
	|         | --addons=cloud-spanner         |                        |         |         |                     |                     |
	|         | --addons=inspektor-gadget      |                        |         |         |                     |                     |
	|         | --driver=docker                |                        |         |         |                     |                     |
	|         | --container-runtime=crio       |                        |         |         |                     |                     |
	|         | --addons=ingress               |                        |         |         |                     |                     |
	|         | --addons=ingress-dns           |                        |         |         |                     |                     |
	| addons  | disable cloud-spanner -p       | addons-748280          | jenkins | v1.30.1 | 31 May 23 18:47 UTC | 31 May 23 18:47 UTC |
	|         | addons-748280                  |                        |         |         |                     |                     |
	| addons  | addons-748280 addons           | addons-748280          | jenkins | v1.30.1 | 31 May 23 18:47 UTC | 31 May 23 18:47 UTC |
	|         | disable metrics-server         |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1         |                        |         |         |                     |                     |
	| ip      | addons-748280 ip               | addons-748280          | jenkins | v1.30.1 | 31 May 23 18:47 UTC | 31 May 23 18:47 UTC |
	| addons  | disable inspektor-gadget -p    | addons-748280          | jenkins | v1.30.1 | 31 May 23 18:47 UTC | 31 May 23 18:47 UTC |
	|         | addons-748280                  |                        |         |         |                     |                     |
	| ssh     | addons-748280 ssh curl -s      | addons-748280          | jenkins | v1.30.1 | 31 May 23 18:47 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:    |                        |         |         |                     |                     |
	|         | nginx.example.com'             |                        |         |         |                     |                     |
	| ip      | addons-748280 ip               | addons-748280          | jenkins | v1.30.1 | 31 May 23 18:49 UTC | 31 May 23 18:49 UTC |
	| addons  | addons-748280 addons disable   | addons-748280          | jenkins | v1.30.1 | 31 May 23 18:50 UTC | 31 May 23 18:50 UTC |
	|         | registry --alsologtostderr     |                        |         |         |                     |                     |
	|         | -v=1                           |                        |         |         |                     |                     |
	|---------|--------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/05/31 18:44:34
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.20.4 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0531 18:44:34.199212    8307 out.go:296] Setting OutFile to fd 1 ...
	I0531 18:44:34.199335    8307 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0531 18:44:34.199345    8307 out.go:309] Setting ErrFile to fd 2...
	I0531 18:44:34.199350    8307 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0531 18:44:34.199520    8307 root.go:336] Updating PATH: /home/jenkins/minikube-integration/16569-2389/.minikube/bin
	I0531 18:44:34.199963    8307 out.go:303] Setting JSON to false
	I0531 18:44:34.200663    8307 start.go:127] hostinfo: {"hostname":"ip-172-31-21-244","uptime":1620,"bootTime":1685557055,"procs":146,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1037-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I0531 18:44:34.200726    8307 start.go:137] virtualization:  
	I0531 18:44:34.203125    8307 out.go:177] * [addons-748280] minikube v1.30.1 on Ubuntu 20.04 (arm64)
	I0531 18:44:34.205045    8307 out.go:177]   - MINIKUBE_LOCATION=16569
	I0531 18:44:34.206660    8307 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0531 18:44:34.205135    8307 notify.go:220] Checking for updates...
	I0531 18:44:34.210461    8307 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/16569-2389/kubeconfig
	I0531 18:44:34.212215    8307 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/16569-2389/.minikube
	I0531 18:44:34.214212    8307 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0531 18:44:34.216012    8307 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0531 18:44:34.218055    8307 driver.go:375] Setting default libvirt URI to qemu:///system
	I0531 18:44:34.243399    8307 docker.go:121] docker version: linux-24.0.2:Docker Engine - Community
	I0531 18:44:34.243526    8307 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0531 18:44:34.330366    8307 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:28 OomKillDisable:true NGoroutines:38 SystemTime:2023-05-31 18:44:34.320213482 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1037-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215166976 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:24.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dce8eb055cbb6872793272b4f20ed16117344f8 Expected:3dce8eb055cbb6872793272b4f20ed16117344f8} RuncCommit:{ID:v1.1.7-0-g860f061 Expected:v1.1.7-0-g860f061} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.5] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.18.1]] Warnings:<nil>}}
	I0531 18:44:34.330472    8307 docker.go:294] overlay module found
	I0531 18:44:34.333786    8307 out.go:177] * Using the docker driver based on user configuration
	I0531 18:44:34.335419    8307 start.go:297] selected driver: docker
	I0531 18:44:34.335438    8307 start.go:875] validating driver "docker" against <nil>
	I0531 18:44:34.335452    8307 start.go:886] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0531 18:44:34.336111    8307 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0531 18:44:34.396640    8307 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:28 OomKillDisable:true NGoroutines:38 SystemTime:2023-05-31 18:44:34.387230156 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1037-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215166976 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:24.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dce8eb055cbb6872793272b4f20ed16117344f8 Expected:3dce8eb055cbb6872793272b4f20ed16117344f8} RuncCommit:{ID:v1.1.7-0-g860f061 Expected:v1.1.7-0-g860f061} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.5] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.18.1]] Warnings:<nil>}}
	I0531 18:44:34.396789    8307 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0531 18:44:34.397011    8307 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0531 18:44:34.398860    8307 out.go:177] * Using Docker driver with root privileges
	I0531 18:44:34.400901    8307 cni.go:84] Creating CNI manager for ""
	I0531 18:44:34.400926    8307 cni.go:142] "docker" driver + "crio" runtime found, recommending kindnet
	I0531 18:44:34.400936    8307 start_flags.go:314] Found "CNI" CNI - setting NetworkPlugin=cni
	I0531 18:44:34.400953    8307 start_flags.go:319] config:
	{Name:addons-748280 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685034446-16582@sha256:aa728b22374c829d1e5b0a5d64d51d3e0ae0f2b191381d957516fdff68f357c8 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:addons-748280 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0531 18:44:34.403668    8307 out.go:177] * Starting control plane node addons-748280 in cluster addons-748280
	I0531 18:44:34.405618    8307 cache.go:122] Beginning downloading kic base image for docker with crio
	I0531 18:44:34.407574    8307 out.go:177] * Pulling base image ...
	I0531 18:44:34.409468    8307 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685034446-16582@sha256:aa728b22374c829d1e5b0a5d64d51d3e0ae0f2b191381d957516fdff68f357c8 in local docker daemon
	I0531 18:44:34.409518    8307 preload.go:132] Checking if preload exists for k8s version v1.27.2 and runtime crio
	I0531 18:44:34.409559    8307 preload.go:148] Found local preload: /home/jenkins/minikube-integration/16569-2389/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.2-cri-o-overlay-arm64.tar.lz4
	I0531 18:44:34.409568    8307 cache.go:57] Caching tarball of preloaded images
	I0531 18:44:34.409628    8307 preload.go:174] Found /home/jenkins/minikube-integration/16569-2389/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.2-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I0531 18:44:34.409638    8307 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.2 on crio
	I0531 18:44:34.409974    8307 profile.go:148] Saving config to /home/jenkins/minikube-integration/16569-2389/.minikube/profiles/addons-748280/config.json ...
	I0531 18:44:34.409994    8307 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16569-2389/.minikube/profiles/addons-748280/config.json: {Name:mka7b556e1d2f2dbe052c145af41fb940259c005 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0531 18:44:34.426908    8307 cache.go:150] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685034446-16582@sha256:aa728b22374c829d1e5b0a5d64d51d3e0ae0f2b191381d957516fdff68f357c8 to local cache
	I0531 18:44:34.427017    8307 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685034446-16582@sha256:aa728b22374c829d1e5b0a5d64d51d3e0ae0f2b191381d957516fdff68f357c8 in local cache directory
	I0531 18:44:34.427043    8307 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685034446-16582@sha256:aa728b22374c829d1e5b0a5d64d51d3e0ae0f2b191381d957516fdff68f357c8 in local cache directory, skipping pull
	I0531 18:44:34.427052    8307 image.go:105] gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685034446-16582@sha256:aa728b22374c829d1e5b0a5d64d51d3e0ae0f2b191381d957516fdff68f357c8 exists in cache, skipping pull
	I0531 18:44:34.427059    8307 cache.go:153] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685034446-16582@sha256:aa728b22374c829d1e5b0a5d64d51d3e0ae0f2b191381d957516fdff68f357c8 as a tarball
	I0531 18:44:34.427069    8307 cache.go:163] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685034446-16582@sha256:aa728b22374c829d1e5b0a5d64d51d3e0ae0f2b191381d957516fdff68f357c8 from local cache
	I0531 18:44:49.656212    8307 cache.go:165] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685034446-16582@sha256:aa728b22374c829d1e5b0a5d64d51d3e0ae0f2b191381d957516fdff68f357c8 from cached tarball
	I0531 18:44:49.656251    8307 cache.go:195] Successfully downloaded all kic artifacts
	I0531 18:44:49.656301    8307 start.go:364] acquiring machines lock for addons-748280: {Name:mkb49c926704a8994ccf8fa9f553fc7de82d6161 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0531 18:44:49.656425    8307 start.go:368] acquired machines lock for "addons-748280" in 100.659µs
	I0531 18:44:49.656458    8307 start.go:93] Provisioning new machine with config: &{Name:addons-748280 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685034446-16582@sha256:aa728b22374c829d1e5b0a5d64d51d3e0ae0f2b191381d957516fdff68f357c8 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:addons-748280 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:} &{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0531 18:44:49.656539    8307 start.go:125] createHost starting for "" (driver="docker")
	I0531 18:44:49.658884    8307 out.go:204] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I0531 18:44:49.659117    8307 start.go:159] libmachine.API.Create for "addons-748280" (driver="docker")
	I0531 18:44:49.659152    8307 client.go:168] LocalClient.Create starting
	I0531 18:44:49.659279    8307 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/16569-2389/.minikube/certs/ca.pem
	I0531 18:44:50.198993    8307 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/16569-2389/.minikube/certs/cert.pem
	I0531 18:44:50.547448    8307 cli_runner.go:164] Run: docker network inspect addons-748280 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0531 18:44:50.569103    8307 cli_runner.go:211] docker network inspect addons-748280 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0531 18:44:50.569207    8307 network_create.go:281] running [docker network inspect addons-748280] to gather additional debugging logs...
	I0531 18:44:50.569228    8307 cli_runner.go:164] Run: docker network inspect addons-748280
	W0531 18:44:50.592054    8307 cli_runner.go:211] docker network inspect addons-748280 returned with exit code 1
	I0531 18:44:50.592090    8307 network_create.go:284] error running [docker network inspect addons-748280]: docker network inspect addons-748280: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-748280 not found
	I0531 18:44:50.592102    8307 network_create.go:286] output of [docker network inspect addons-748280]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-748280 not found
	
	** /stderr **
	I0531 18:44:50.592176    8307 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0531 18:44:50.610931    8307 network.go:209] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4000b9ea50}
	I0531 18:44:50.610979    8307 network_create.go:123] attempt to create docker network addons-748280 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0531 18:44:50.611036    8307 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-748280 addons-748280
	I0531 18:44:50.683505    8307 network_create.go:107] docker network addons-748280 192.168.49.0/24 created
	I0531 18:44:50.683538    8307 kic.go:117] calculated static IP "192.168.49.2" for the "addons-748280" container
	I0531 18:44:50.683636    8307 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0531 18:44:50.701348    8307 cli_runner.go:164] Run: docker volume create addons-748280 --label name.minikube.sigs.k8s.io=addons-748280 --label created_by.minikube.sigs.k8s.io=true
	I0531 18:44:50.723389    8307 oci.go:103] Successfully created a docker volume addons-748280
	I0531 18:44:50.723482    8307 cli_runner.go:164] Run: docker run --rm --name addons-748280-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-748280 --entrypoint /usr/bin/test -v addons-748280:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685034446-16582@sha256:aa728b22374c829d1e5b0a5d64d51d3e0ae0f2b191381d957516fdff68f357c8 -d /var/lib
	I0531 18:44:52.986241    8307 cli_runner.go:217] Completed: docker run --rm --name addons-748280-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-748280 --entrypoint /usr/bin/test -v addons-748280:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685034446-16582@sha256:aa728b22374c829d1e5b0a5d64d51d3e0ae0f2b191381d957516fdff68f357c8 -d /var/lib: (2.262716848s)
	I0531 18:44:52.986275    8307 oci.go:107] Successfully prepared a docker volume addons-748280
	I0531 18:44:52.986300    8307 preload.go:132] Checking if preload exists for k8s version v1.27.2 and runtime crio
	I0531 18:44:52.986318    8307 kic.go:190] Starting extracting preloaded images to volume ...
	I0531 18:44:52.986398    8307 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/16569-2389/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.2-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-748280:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685034446-16582@sha256:aa728b22374c829d1e5b0a5d64d51d3e0ae0f2b191381d957516fdff68f357c8 -I lz4 -xf /preloaded.tar -C /extractDir
	I0531 18:44:57.140932    8307 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/16569-2389/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.2-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-748280:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685034446-16582@sha256:aa728b22374c829d1e5b0a5d64d51d3e0ae0f2b191381d957516fdff68f357c8 -I lz4 -xf /preloaded.tar -C /extractDir: (4.154496534s)
	I0531 18:44:57.140964    8307 kic.go:199] duration metric: took 4.154642 seconds to extract preloaded images to volume
	W0531 18:44:57.141129    8307 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0531 18:44:57.141253    8307 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0531 18:44:57.207736    8307 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-748280 --name addons-748280 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-748280 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-748280 --network addons-748280 --ip 192.168.49.2 --volume addons-748280:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685034446-16582@sha256:aa728b22374c829d1e5b0a5d64d51d3e0ae0f2b191381d957516fdff68f357c8
	I0531 18:44:57.600385    8307 cli_runner.go:164] Run: docker container inspect addons-748280 --format={{.State.Running}}
	I0531 18:44:57.630120    8307 cli_runner.go:164] Run: docker container inspect addons-748280 --format={{.State.Status}}
	I0531 18:44:57.665080    8307 cli_runner.go:164] Run: docker exec addons-748280 stat /var/lib/dpkg/alternatives/iptables
	I0531 18:44:57.754869    8307 oci.go:144] the created container "addons-748280" has a running status.
	I0531 18:44:57.754894    8307 kic.go:221] Creating ssh key for kic: /home/jenkins/minikube-integration/16569-2389/.minikube/machines/addons-748280/id_rsa...
	I0531 18:44:58.437004    8307 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/16569-2389/.minikube/machines/addons-748280/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0531 18:44:58.479968    8307 cli_runner.go:164] Run: docker container inspect addons-748280 --format={{.State.Status}}
	I0531 18:44:58.512482    8307 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0531 18:44:58.512501    8307 kic_runner.go:114] Args: [docker exec --privileged addons-748280 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0531 18:44:58.611991    8307 cli_runner.go:164] Run: docker container inspect addons-748280 --format={{.State.Status}}
	I0531 18:44:58.637094    8307 machine.go:88] provisioning docker machine ...
	I0531 18:44:58.637125    8307 ubuntu.go:169] provisioning hostname "addons-748280"
	I0531 18:44:58.637191    8307 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-748280
	I0531 18:44:58.663927    8307 main.go:141] libmachine: Using SSH client type: native
	I0531 18:44:58.664402    8307 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x39eaa0] 0x3a1430 <nil>  [] 0s} 127.0.0.1 32772 <nil> <nil>}
	I0531 18:44:58.664423    8307 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-748280 && echo "addons-748280" | sudo tee /etc/hostname
	I0531 18:44:58.820805    8307 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-748280
	
	I0531 18:44:58.820964    8307 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-748280
	I0531 18:44:58.843803    8307 main.go:141] libmachine: Using SSH client type: native
	I0531 18:44:58.844231    8307 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x39eaa0] 0x3a1430 <nil>  [] 0s} 127.0.0.1 32772 <nil> <nil>}
	I0531 18:44:58.844248    8307 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-748280' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-748280/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-748280' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0531 18:44:58.979966    8307 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0531 18:44:58.979994    8307 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/16569-2389/.minikube CaCertPath:/home/jenkins/minikube-integration/16569-2389/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/16569-2389/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/16569-2389/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/16569-2389/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/16569-2389/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/16569-2389/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/16569-2389/.minikube}
	I0531 18:44:58.980012    8307 ubuntu.go:177] setting up certificates
	I0531 18:44:58.980021    8307 provision.go:83] configureAuth start
	I0531 18:44:58.980087    8307 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-748280
	I0531 18:44:58.998497    8307 provision.go:138] copyHostCerts
	I0531 18:44:58.998601    8307 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16569-2389/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/16569-2389/.minikube/ca.pem (1078 bytes)
	I0531 18:44:58.998763    8307 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16569-2389/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/16569-2389/.minikube/cert.pem (1123 bytes)
	I0531 18:44:58.998850    8307 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16569-2389/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/16569-2389/.minikube/key.pem (1679 bytes)
	I0531 18:44:58.998915    8307 provision.go:112] generating server cert: /home/jenkins/minikube-integration/16569-2389/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/16569-2389/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/16569-2389/.minikube/certs/ca-key.pem org=jenkins.addons-748280 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube addons-748280]
	I0531 18:44:59.525131    8307 provision.go:172] copyRemoteCerts
	I0531 18:44:59.525203    8307 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0531 18:44:59.525250    8307 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-748280
	I0531 18:44:59.544897    8307 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/16569-2389/.minikube/machines/addons-748280/id_rsa Username:docker}
	I0531 18:44:59.641671    8307 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16569-2389/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0531 18:44:59.670850    8307 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16569-2389/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0531 18:44:59.700812    8307 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16569-2389/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0531 18:44:59.729661    8307 provision.go:86] duration metric: configureAuth took 749.626443ms
	I0531 18:44:59.729776    8307 ubuntu.go:193] setting minikube options for container-runtime
	I0531 18:44:59.729976    8307 config.go:182] Loaded profile config "addons-748280": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.27.2
	I0531 18:44:59.730079    8307 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-748280
	I0531 18:44:59.748685    8307 main.go:141] libmachine: Using SSH client type: native
	I0531 18:44:59.749114    8307 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x39eaa0] 0x3a1430 <nil>  [] 0s} 127.0.0.1 32772 <nil> <nil>}
	I0531 18:44:59.749136    8307 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0531 18:45:00.020308    8307 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0531 18:45:00.020391    8307 machine.go:91] provisioned docker machine in 1.383275276s
	I0531 18:45:00.020418    8307 client.go:171] LocalClient.Create took 10.361256105s
	I0531 18:45:00.020489    8307 start.go:167] duration metric: libmachine.API.Create for "addons-748280" took 10.361330976s
	I0531 18:45:00.020541    8307 start.go:300] post-start starting for "addons-748280" (driver="docker")
	I0531 18:45:00.020576    8307 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0531 18:45:00.020729    8307 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0531 18:45:00.020817    8307 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-748280
	I0531 18:45:00.081030    8307 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/16569-2389/.minikube/machines/addons-748280/id_rsa Username:docker}
	I0531 18:45:00.202521    8307 ssh_runner.go:195] Run: cat /etc/os-release
	I0531 18:45:00.208046    8307 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0531 18:45:00.208087    8307 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0531 18:45:00.208100    8307 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0531 18:45:00.208106    8307 info.go:137] Remote host: Ubuntu 22.04.2 LTS
	I0531 18:45:00.208117    8307 filesync.go:126] Scanning /home/jenkins/minikube-integration/16569-2389/.minikube/addons for local assets ...
	I0531 18:45:00.208202    8307 filesync.go:126] Scanning /home/jenkins/minikube-integration/16569-2389/.minikube/files for local assets ...
	I0531 18:45:00.208246    8307 start.go:303] post-start completed in 187.671698ms
	I0531 18:45:00.208591    8307 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-748280
	I0531 18:45:00.230526    8307 profile.go:148] Saving config to /home/jenkins/minikube-integration/16569-2389/.minikube/profiles/addons-748280/config.json ...
	I0531 18:45:00.230917    8307 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0531 18:45:00.230981    8307 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-748280
	I0531 18:45:00.252554    8307 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/16569-2389/.minikube/machines/addons-748280/id_rsa Username:docker}
	I0531 18:45:00.345419    8307 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0531 18:45:00.351940    8307 start.go:128] duration metric: createHost completed in 10.695386174s
	I0531 18:45:00.351966    8307 start.go:83] releasing machines lock for "addons-748280", held for 10.695527186s
	I0531 18:45:00.352041    8307 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-748280
	I0531 18:45:00.372335    8307 ssh_runner.go:195] Run: cat /version.json
	I0531 18:45:00.372354    8307 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0531 18:45:00.372400    8307 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-748280
	I0531 18:45:00.372427    8307 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-748280
	I0531 18:45:00.405393    8307 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/16569-2389/.minikube/machines/addons-748280/id_rsa Username:docker}
	I0531 18:45:00.406393    8307 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/16569-2389/.minikube/machines/addons-748280/id_rsa Username:docker}
	I0531 18:45:00.500117    8307 ssh_runner.go:195] Run: systemctl --version
	I0531 18:45:00.640250    8307 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0531 18:45:00.787841    8307 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0531 18:45:00.793315    8307 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0531 18:45:00.819180    8307 cni.go:220] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0531 18:45:00.819261    8307 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0531 18:45:00.865192    8307 cni.go:261] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0531 18:45:00.865251    8307 start.go:481] detecting cgroup driver to use...
	I0531 18:45:00.865297    8307 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0531 18:45:00.865370    8307 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0531 18:45:00.885925    8307 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0531 18:45:00.900754    8307 docker.go:193] disabling cri-docker service (if available) ...
	I0531 18:45:00.900842    8307 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0531 18:45:00.917150    8307 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0531 18:45:00.935795    8307 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0531 18:45:01.037224    8307 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0531 18:45:01.147629    8307 docker.go:209] disabling docker service ...
	I0531 18:45:01.147704    8307 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0531 18:45:01.169926    8307 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0531 18:45:01.184167    8307 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0531 18:45:01.290326    8307 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0531 18:45:01.402641    8307 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0531 18:45:01.416289    8307 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0531 18:45:01.436394    8307 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0531 18:45:01.436510    8307 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0531 18:45:01.449935    8307 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0531 18:45:01.450005    8307 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0531 18:45:01.462473    8307 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0531 18:45:01.474607    8307 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
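	Taken together, the three sed edits above pin the pause image, switch CRI-O's cgroup manager, and re-add conmon_cgroup in the drop-in; a quick way to confirm the result on the node (expected values reconstructed from the commands, not captured output):
	    sudo grep -E '^(pause_image|cgroup_manager|conmon_cgroup)' /etc/crio/crio.conf.d/02-crio.conf
	    # pause_image = "registry.k8s.io/pause:3.9"
	    # cgroup_manager = "cgroupfs"
	    # conmon_cgroup = "pod"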
	I0531 18:45:01.486889    8307 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0531 18:45:01.498654    8307 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0531 18:45:01.509731    8307 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0531 18:45:01.520743    8307 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0531 18:45:01.608658    8307 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0531 18:45:01.730095    8307 start.go:528] Will wait 60s for socket path /var/run/crio/crio.sock
	I0531 18:45:01.730173    8307 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0531 18:45:01.735016    8307 start.go:549] Will wait 60s for crictl version
	I0531 18:45:01.735079    8307 ssh_runner.go:195] Run: which crictl
	I0531 18:45:01.739776    8307 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0531 18:45:01.782322    8307 start.go:565] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.5
	RuntimeApiVersion:  v1
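	The version probe above shows crictl reaching CRI-O through the endpoint configured in /etc/crictl.yaml earlier; an equivalent manual invocation that bypasses the config file (illustrative only):
	    sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version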
	I0531 18:45:01.782510    8307 ssh_runner.go:195] Run: crio --version
	I0531 18:45:01.826461    8307 ssh_runner.go:195] Run: crio --version
	I0531 18:45:01.871612    8307 out.go:177] * Preparing Kubernetes v1.27.2 on CRI-O 1.24.5 ...
	I0531 18:45:01.873172    8307 cli_runner.go:164] Run: docker network inspect addons-748280 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0531 18:45:01.891312    8307 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0531 18:45:01.896148    8307 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0531 18:45:01.910311    8307 preload.go:132] Checking if preload exists for k8s version v1.27.2 and runtime crio
	I0531 18:45:01.910384    8307 ssh_runner.go:195] Run: sudo crictl images --output json
	I0531 18:45:01.980188    8307 crio.go:496] all images are preloaded for cri-o runtime.
	I0531 18:45:01.980219    8307 crio.go:415] Images already preloaded, skipping extraction
	I0531 18:45:01.980281    8307 ssh_runner.go:195] Run: sudo crictl images --output json
	I0531 18:45:02.028255    8307 crio.go:496] all images are preloaded for cri-o runtime.
	I0531 18:45:02.028278    8307 cache_images.go:84] Images are preloaded, skipping loading
	I0531 18:45:02.028355    8307 ssh_runner.go:195] Run: crio config
	I0531 18:45:02.088423    8307 cni.go:84] Creating CNI manager for ""
	I0531 18:45:02.088489    8307 cni.go:142] "docker" driver + "crio" runtime found, recommending kindnet
	I0531 18:45:02.088510    8307 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0531 18:45:02.088531    8307 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.27.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-748280 NodeName:addons-748280 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0531 18:45:02.088679    8307 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-748280"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.27.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
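	A generated config in this shape can be sanity-checked before init is run; a sketch, assuming the staged v1.27.2 binaries and the kubeadm.yaml path this log uses a few lines below (kubeadm's config validate subcommand is available in this release):
	    sudo /var/lib/minikube/binaries/v1.27.2/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml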
	
	I0531 18:45:02.088778    8307 kubeadm.go:971] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.27.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --enforce-node-allocatable= --hostname-override=addons-748280 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.27.2 ClusterName:addons-748280 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
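	The kubelet unit drop-in rendered above is copied to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf just below; the effective unit can be reviewed with systemd's own tooling, e.g. (a manual check, assuming the addons-748280 node container):
	    docker exec addons-748280 systemctl cat kubelet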
	I0531 18:45:02.088850    8307 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.27.2
	I0531 18:45:02.100181    8307 binaries.go:44] Found k8s binaries, skipping transfer
	I0531 18:45:02.100327    8307 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0531 18:45:02.111391    8307 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (423 bytes)
	I0531 18:45:02.134464    8307 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0531 18:45:02.158425    8307 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2094 bytes)
	I0531 18:45:02.181347    8307 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0531 18:45:02.186988    8307 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
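	This grep/rewrite pair, together with the matching pair for host.minikube.internal above, makes the node's /etc/hosts self-contained; after both runs it should contain, per the addresses in this log:
	    grep minikube.internal /etc/hosts
	    # 192.168.49.1	host.minikube.internal
	    # 192.168.49.2	control-plane.minikube.internal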
	I0531 18:45:02.200937    8307 certs.go:56] Setting up /home/jenkins/minikube-integration/16569-2389/.minikube/profiles/addons-748280 for IP: 192.168.49.2
	I0531 18:45:02.200969    8307 certs.go:190] acquiring lock for shared ca certs: {Name:mk0147accf8b8da231d39646bdc89fced67451cd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0531 18:45:02.201099    8307 certs.go:204] generating minikubeCA CA: /home/jenkins/minikube-integration/16569-2389/.minikube/ca.key
	I0531 18:45:02.559806    8307 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/16569-2389/.minikube/ca.crt ...
	I0531 18:45:02.559836    8307 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16569-2389/.minikube/ca.crt: {Name:mk1ba87ff99ad095694275f285b29b67f66bdcd2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0531 18:45:02.560017    8307 crypto.go:164] Writing key to /home/jenkins/minikube-integration/16569-2389/.minikube/ca.key ...
	I0531 18:45:02.560029    8307 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16569-2389/.minikube/ca.key: {Name:mk89234849bfd4ebf31d5cca0486baba56b6f968 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0531 18:45:02.560115    8307 certs.go:204] generating proxyClientCA CA: /home/jenkins/minikube-integration/16569-2389/.minikube/proxy-client-ca.key
	I0531 18:45:03.034284    8307 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/16569-2389/.minikube/proxy-client-ca.crt ...
	I0531 18:45:03.034315    8307 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16569-2389/.minikube/proxy-client-ca.crt: {Name:mk6ea7cc75db9fa0483654cf8f122fd3b0e3609c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0531 18:45:03.034500    8307 crypto.go:164] Writing key to /home/jenkins/minikube-integration/16569-2389/.minikube/proxy-client-ca.key ...
	I0531 18:45:03.034512    8307 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16569-2389/.minikube/proxy-client-ca.key: {Name:mkf918c2ce14783aab516b848bd9c9e74db86d4d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0531 18:45:03.034635    8307 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/16569-2389/.minikube/profiles/addons-748280/client.key
	I0531 18:45:03.034652    8307 crypto.go:68] Generating cert /home/jenkins/minikube-integration/16569-2389/.minikube/profiles/addons-748280/client.crt with IP's: []
	I0531 18:45:03.595796    8307 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/16569-2389/.minikube/profiles/addons-748280/client.crt ...
	I0531 18:45:03.595826    8307 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16569-2389/.minikube/profiles/addons-748280/client.crt: {Name:mkba432b72effdb186ae16d5dfa242c36c5ccf2d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0531 18:45:03.596020    8307 crypto.go:164] Writing key to /home/jenkins/minikube-integration/16569-2389/.minikube/profiles/addons-748280/client.key ...
	I0531 18:45:03.596033    8307 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16569-2389/.minikube/profiles/addons-748280/client.key: {Name:mkbb3264ea9ca332563dc8e996b5eaa1af5da2c8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0531 18:45:03.596120    8307 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/16569-2389/.minikube/profiles/addons-748280/apiserver.key.dd3b5fb2
	I0531 18:45:03.596138    8307 crypto.go:68] Generating cert /home/jenkins/minikube-integration/16569-2389/.minikube/profiles/addons-748280/apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0531 18:45:04.086595    8307 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/16569-2389/.minikube/profiles/addons-748280/apiserver.crt.dd3b5fb2 ...
	I0531 18:45:04.086628    8307 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16569-2389/.minikube/profiles/addons-748280/apiserver.crt.dd3b5fb2: {Name:mk92505edbaa43f853457c596ae242259dfc280e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0531 18:45:04.086844    8307 crypto.go:164] Writing key to /home/jenkins/minikube-integration/16569-2389/.minikube/profiles/addons-748280/apiserver.key.dd3b5fb2 ...
	I0531 18:45:04.086860    8307 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16569-2389/.minikube/profiles/addons-748280/apiserver.key.dd3b5fb2: {Name:mk13060a044e0f51fbe1670d26b3d49304e52c4f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0531 18:45:04.086947    8307 certs.go:337] copying /home/jenkins/minikube-integration/16569-2389/.minikube/profiles/addons-748280/apiserver.crt.dd3b5fb2 -> /home/jenkins/minikube-integration/16569-2389/.minikube/profiles/addons-748280/apiserver.crt
	I0531 18:45:04.087018    8307 certs.go:341] copying /home/jenkins/minikube-integration/16569-2389/.minikube/profiles/addons-748280/apiserver.key.dd3b5fb2 -> /home/jenkins/minikube-integration/16569-2389/.minikube/profiles/addons-748280/apiserver.key
	I0531 18:45:04.087069    8307 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/16569-2389/.minikube/profiles/addons-748280/proxy-client.key
	I0531 18:45:04.087088    8307 crypto.go:68] Generating cert /home/jenkins/minikube-integration/16569-2389/.minikube/profiles/addons-748280/proxy-client.crt with IP's: []
	I0531 18:45:04.855975    8307 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/16569-2389/.minikube/profiles/addons-748280/proxy-client.crt ...
	I0531 18:45:04.856005    8307 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16569-2389/.minikube/profiles/addons-748280/proxy-client.crt: {Name:mk779e87f69baa06f87ee439c4e4bba857c5ab50 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0531 18:45:04.856197    8307 crypto.go:164] Writing key to /home/jenkins/minikube-integration/16569-2389/.minikube/profiles/addons-748280/proxy-client.key ...
	I0531 18:45:04.856210    8307 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16569-2389/.minikube/profiles/addons-748280/proxy-client.key: {Name:mkd3f8fd1539e560140df2154fd5479bb0686a7f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0531 18:45:04.856396    8307 certs.go:437] found cert: /home/jenkins/minikube-integration/16569-2389/.minikube/certs/home/jenkins/minikube-integration/16569-2389/.minikube/certs/ca-key.pem (1675 bytes)
	I0531 18:45:04.856438    8307 certs.go:437] found cert: /home/jenkins/minikube-integration/16569-2389/.minikube/certs/home/jenkins/minikube-integration/16569-2389/.minikube/certs/ca.pem (1078 bytes)
	I0531 18:45:04.856463    8307 certs.go:437] found cert: /home/jenkins/minikube-integration/16569-2389/.minikube/certs/home/jenkins/minikube-integration/16569-2389/.minikube/certs/cert.pem (1123 bytes)
	I0531 18:45:04.856493    8307 certs.go:437] found cert: /home/jenkins/minikube-integration/16569-2389/.minikube/certs/home/jenkins/minikube-integration/16569-2389/.minikube/certs/key.pem (1679 bytes)
	I0531 18:45:04.857067    8307 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16569-2389/.minikube/profiles/addons-748280/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0531 18:45:04.885717    8307 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16569-2389/.minikube/profiles/addons-748280/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0531 18:45:04.914158    8307 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16569-2389/.minikube/profiles/addons-748280/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0531 18:45:04.943134    8307 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16569-2389/.minikube/profiles/addons-748280/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0531 18:45:04.972275    8307 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16569-2389/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0531 18:45:05.003205    8307 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16569-2389/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0531 18:45:05.034089    8307 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16569-2389/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0531 18:45:05.062298    8307 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16569-2389/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0531 18:45:05.092000    8307 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16569-2389/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0531 18:45:05.120889    8307 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0531 18:45:05.142302    8307 ssh_runner.go:195] Run: openssl version
	I0531 18:45:05.150342    8307 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0531 18:45:05.162861    8307 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0531 18:45:05.167636    8307 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 May 31 18:45 /usr/share/ca-certificates/minikubeCA.pem
	I0531 18:45:05.167744    8307 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0531 18:45:05.176462    8307 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
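	The b5213941.0 symlink name follows OpenSSL's subject-hash convention, which is exactly what the openssl x509 -hash call above computes; a sketch of reproducing the link target by hand:
	    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # prints b5213941
	    ls -l /etc/ssl/certs/b5213941.0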
	I0531 18:45:05.187975    8307 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0531 18:45:05.192402    8307 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0531 18:45:05.192457    8307 kubeadm.go:404] StartCluster: {Name:addons-748280 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685034446-16582@sha256:aa728b22374c829d1e5b0a5d64d51d3e0ae0f2b191381d957516fdff68f357c8 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:addons-748280 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0531 18:45:05.192538    8307 cri.go:53] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0531 18:45:05.192593    8307 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0531 18:45:05.234708    8307 cri.go:88] found id: ""
	I0531 18:45:05.234799    8307 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0531 18:45:05.245430    8307 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0531 18:45:05.256262    8307 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0531 18:45:05.256325    8307 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0531 18:45:05.267171    8307 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0531 18:45:05.267216    8307 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0531 18:45:05.371934    8307 kubeadm.go:322] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1037-aws\n", err: exit status 1
	I0531 18:45:05.457543    8307 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0531 18:45:05.457753    8307 kubeadm.go:322] W0531 18:45:05.457018    1052 images.go:80] could not find officially supported version of etcd for Kubernetes v1.27.2, falling back to the nearest etcd version (3.5.7-0)
	I0531 18:45:12.746265    8307 kubeadm.go:322] W0531 18:45:12.745910    1052 images.go:80] could not find officially supported version of etcd for Kubernetes v1.27.2, falling back to the nearest etcd version (3.5.7-0)
	I0531 18:45:21.741713    8307 kubeadm.go:322] [init] Using Kubernetes version: v1.27.2
	I0531 18:45:21.741776    8307 kubeadm.go:322] [preflight] Running pre-flight checks
	I0531 18:45:21.741864    8307 kubeadm.go:322] [preflight] The system verification failed. Printing the output from the verification:
	I0531 18:45:21.741930    8307 kubeadm.go:322] KERNEL_VERSION: 5.15.0-1037-aws
	I0531 18:45:21.741968    8307 kubeadm.go:322] OS: Linux
	I0531 18:45:21.742054    8307 kubeadm.go:322] CGROUPS_CPU: enabled
	I0531 18:45:21.742135    8307 kubeadm.go:322] CGROUPS_CPUACCT: enabled
	I0531 18:45:21.742227    8307 kubeadm.go:322] CGROUPS_CPUSET: enabled
	I0531 18:45:21.742316    8307 kubeadm.go:322] CGROUPS_DEVICES: enabled
	I0531 18:45:21.742370    8307 kubeadm.go:322] CGROUPS_FREEZER: enabled
	I0531 18:45:21.742471    8307 kubeadm.go:322] CGROUPS_MEMORY: enabled
	I0531 18:45:21.742543    8307 kubeadm.go:322] CGROUPS_PIDS: enabled
	I0531 18:45:21.742612    8307 kubeadm.go:322] CGROUPS_HUGETLB: enabled
	I0531 18:45:21.742679    8307 kubeadm.go:322] CGROUPS_BLKIO: enabled
	I0531 18:45:21.742788    8307 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0531 18:45:21.742955    8307 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0531 18:45:21.743059    8307 kubeadm.go:322] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0531 18:45:21.743132    8307 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0531 18:45:21.744938    8307 out.go:204]   - Generating certificates and keys ...
	I0531 18:45:21.745041    8307 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0531 18:45:21.745105    8307 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0531 18:45:21.745175    8307 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0531 18:45:21.745237    8307 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0531 18:45:21.745300    8307 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0531 18:45:21.745351    8307 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0531 18:45:21.745407    8307 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0531 18:45:21.745520    8307 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [addons-748280 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0531 18:45:21.745576    8307 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0531 18:45:21.745687    8307 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [addons-748280 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0531 18:45:21.745753    8307 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0531 18:45:21.745816    8307 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0531 18:45:21.745864    8307 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0531 18:45:21.745921    8307 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0531 18:45:21.745984    8307 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0531 18:45:21.746036    8307 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0531 18:45:21.746103    8307 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0531 18:45:21.746160    8307 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0531 18:45:21.746260    8307 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0531 18:45:21.746344    8307 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0531 18:45:21.746384    8307 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0531 18:45:21.746454    8307 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0531 18:45:21.748644    8307 out.go:204]   - Booting up control plane ...
	I0531 18:45:21.748789    8307 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0531 18:45:21.748876    8307 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0531 18:45:21.748975    8307 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0531 18:45:21.749067    8307 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0531 18:45:21.749260    8307 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0531 18:45:21.749357    8307 kubeadm.go:322] [apiclient] All control plane components are healthy after 7.502867 seconds
	I0531 18:45:21.749490    8307 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0531 18:45:21.749655    8307 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0531 18:45:21.749730    8307 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0531 18:45:21.749941    8307 kubeadm.go:322] [mark-control-plane] Marking the node addons-748280 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0531 18:45:21.750026    8307 kubeadm.go:322] [bootstrap-token] Using token: 9v29xc.k4dxqpcgpeqcxvgr
	I0531 18:45:21.751735    8307 out.go:204]   - Configuring RBAC rules ...
	I0531 18:45:21.751852    8307 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0531 18:45:21.751939    8307 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0531 18:45:21.752080    8307 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0531 18:45:21.752208    8307 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0531 18:45:21.752321    8307 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0531 18:45:21.752428    8307 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0531 18:45:21.752545    8307 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0531 18:45:21.752590    8307 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0531 18:45:21.752638    8307 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0531 18:45:21.752647    8307 kubeadm.go:322] 
	I0531 18:45:21.752704    8307 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0531 18:45:21.752711    8307 kubeadm.go:322] 
	I0531 18:45:21.752784    8307 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0531 18:45:21.752792    8307 kubeadm.go:322] 
	I0531 18:45:21.752818    8307 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0531 18:45:21.752876    8307 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0531 18:45:21.752928    8307 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0531 18:45:21.752936    8307 kubeadm.go:322] 
	I0531 18:45:21.752987    8307 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0531 18:45:21.752993    8307 kubeadm.go:322] 
	I0531 18:45:21.753038    8307 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0531 18:45:21.753046    8307 kubeadm.go:322] 
	I0531 18:45:21.753096    8307 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0531 18:45:21.753170    8307 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0531 18:45:21.753238    8307 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0531 18:45:21.753246    8307 kubeadm.go:322] 
	I0531 18:45:21.753325    8307 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0531 18:45:21.753401    8307 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0531 18:45:21.753409    8307 kubeadm.go:322] 
	I0531 18:45:21.753488    8307 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token 9v29xc.k4dxqpcgpeqcxvgr \
	I0531 18:45:21.753589    8307 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:8cab06d0df96335aa364fa490e6822dfb6067993e3a4e02e6ce54947f37c8db2 \
	I0531 18:45:21.753610    8307 kubeadm.go:322] 	--control-plane 
	I0531 18:45:21.753619    8307 kubeadm.go:322] 
	I0531 18:45:21.753698    8307 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0531 18:45:21.753707    8307 kubeadm.go:322] 
	I0531 18:45:21.753784    8307 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token 9v29xc.k4dxqpcgpeqcxvgr \
	I0531 18:45:21.753900    8307 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:8cab06d0df96335aa364fa490e6822dfb6067993e3a4e02e6ce54947f37c8db2 
	I0531 18:45:21.753912    8307 cni.go:84] Creating CNI manager for ""
	I0531 18:45:21.753919    8307 cni.go:142] "docker" driver + "crio" runtime found, recommending kindnet
	I0531 18:45:21.755571    8307 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0531 18:45:21.757225    8307 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0531 18:45:21.774931    8307 cni.go:181] applying CNI manifest using /var/lib/minikube/binaries/v1.27.2/kubectl ...
	I0531 18:45:21.774957    8307 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0531 18:45:21.838508    8307 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
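	With the kindnet manifest applied, the rollout can be followed from any shell with access to the kubeconfig; a sketch, assuming the conventional app=kindnet label on the daemonset pods:
	    kubectl -n kube-system get pods -l app=kindnet -w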
	I0531 18:45:22.759068    8307 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0531 18:45:22.759191    8307 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:45:22.759267    8307 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl label nodes minikube.k8s.io/version=v1.30.1 minikube.k8s.io/commit=7022875d4a054c2d518e5e5a7b9d500799d50140 minikube.k8s.io/name=addons-748280 minikube.k8s.io/updated_at=2023_05_31T18_45_22_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:45:22.964637    8307 ops.go:34] apiserver oom_adj: -16
	I0531 18:45:22.964721    8307 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:45:23.601498    8307 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:45:24.101513    8307 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:45:24.601590    8307 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:45:25.101626    8307 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:45:25.600916    8307 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:45:26.101546    8307 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:45:26.601381    8307 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:45:27.101618    8307 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:45:27.601855    8307 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:45:28.101801    8307 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:45:28.601430    8307 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:45:29.100961    8307 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:45:29.600906    8307 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:45:30.101618    8307 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:45:30.601011    8307 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:45:31.100970    8307 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:45:31.600858    8307 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:45:32.100910    8307 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:45:32.601548    8307 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:45:33.101488    8307 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:45:33.600908    8307 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:45:34.101578    8307 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:45:34.601367    8307 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:45:34.717975    8307 kubeadm.go:1076] duration metric: took 11.95883055s to wait for elevateKubeSystemPrivileges.
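	The burst of identical `kubectl get sa default` calls above is a readiness poll: the default ServiceAccount is created asynchronously after init, and minikube retries until it appears. A minimal shell equivalent of the same wait:
	    until sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
	      sleep 0.5
	    done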
	I0531 18:45:34.718007    8307 kubeadm.go:406] StartCluster complete in 29.525553835s
	I0531 18:45:34.718023    8307 settings.go:142] acquiring lock: {Name:mk7112454687e7bda5617b0aa762b583179f0f78 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0531 18:45:34.718179    8307 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/16569-2389/kubeconfig
	I0531 18:45:34.719378    8307 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16569-2389/kubeconfig: {Name:mk0c7b1a200a0a97aa7bf4307790fd99336ec425 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0531 18:45:34.721085    8307 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0531 18:45:34.722158    8307 addons.go:496] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:true]
	I0531 18:45:34.722396    8307 addons.go:66] Setting volumesnapshots=true in profile "addons-748280"
	I0531 18:45:34.722422    8307 addons.go:228] Setting addon volumesnapshots=true in "addons-748280"
	I0531 18:45:34.722474    8307 host.go:66] Checking if "addons-748280" exists ...
	I0531 18:45:34.723365    8307 cli_runner.go:164] Run: docker container inspect addons-748280 --format={{.State.Status}}
	I0531 18:45:34.724770    8307 addons.go:66] Setting gcp-auth=true in profile "addons-748280"
	I0531 18:45:34.724807    8307 mustload.go:65] Loading cluster: addons-748280
	I0531 18:45:34.725139    8307 config.go:182] Loaded profile config "addons-748280": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.27.2
	I0531 18:45:34.725527    8307 cli_runner.go:164] Run: docker container inspect addons-748280 --format={{.State.Status}}
	I0531 18:45:34.725895    8307 config.go:182] Loaded profile config "addons-748280": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.27.2
	I0531 18:45:34.725958    8307 addons.go:66] Setting cloud-spanner=true in profile "addons-748280"
	I0531 18:45:34.725980    8307 addons.go:228] Setting addon cloud-spanner=true in "addons-748280"
	I0531 18:45:34.726035    8307 host.go:66] Checking if "addons-748280" exists ...
	I0531 18:45:34.726709    8307 cli_runner.go:164] Run: docker container inspect addons-748280 --format={{.State.Status}}
	I0531 18:45:34.742054    8307 addons.go:66] Setting csi-hostpath-driver=true in profile "addons-748280"
	I0531 18:45:34.742146    8307 addons.go:228] Setting addon csi-hostpath-driver=true in "addons-748280"
	I0531 18:45:34.742205    8307 host.go:66] Checking if "addons-748280" exists ...
	I0531 18:45:34.742960    8307 cli_runner.go:164] Run: docker container inspect addons-748280 --format={{.State.Status}}
	I0531 18:45:34.756907    8307 addons.go:66] Setting default-storageclass=true in profile "addons-748280"
	I0531 18:45:34.756962    8307 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-748280"
	I0531 18:45:34.757495    8307 cli_runner.go:164] Run: docker container inspect addons-748280 --format={{.State.Status}}
	I0531 18:45:34.757749    8307 addons.go:66] Setting ingress=true in profile "addons-748280"
	I0531 18:45:34.757801    8307 addons.go:228] Setting addon ingress=true in "addons-748280"
	I0531 18:45:34.757899    8307 host.go:66] Checking if "addons-748280" exists ...
	I0531 18:45:34.758615    8307 cli_runner.go:164] Run: docker container inspect addons-748280 --format={{.State.Status}}
	I0531 18:45:34.758875    8307 addons.go:66] Setting ingress-dns=true in profile "addons-748280"
	I0531 18:45:34.758903    8307 addons.go:228] Setting addon ingress-dns=true in "addons-748280"
	I0531 18:45:34.758984    8307 host.go:66] Checking if "addons-748280" exists ...
	I0531 18:45:34.759645    8307 cli_runner.go:164] Run: docker container inspect addons-748280 --format={{.State.Status}}
	I0531 18:45:34.763025    8307 addons.go:66] Setting inspektor-gadget=true in profile "addons-748280"
	I0531 18:45:34.763058    8307 addons.go:228] Setting addon inspektor-gadget=true in "addons-748280"
	I0531 18:45:34.763203    8307 host.go:66] Checking if "addons-748280" exists ...
	I0531 18:45:34.777113    8307 cli_runner.go:164] Run: docker container inspect addons-748280 --format={{.State.Status}}
	I0531 18:45:34.799579    8307 addons.go:66] Setting metrics-server=true in profile "addons-748280"
	I0531 18:45:34.799620    8307 addons.go:228] Setting addon metrics-server=true in "addons-748280"
	I0531 18:45:34.799699    8307 host.go:66] Checking if "addons-748280" exists ...
	I0531 18:45:34.800348    8307 cli_runner.go:164] Run: docker container inspect addons-748280 --format={{.State.Status}}
	I0531 18:45:34.800514    8307 addons.go:66] Setting registry=true in profile "addons-748280"
	I0531 18:45:34.800530    8307 addons.go:228] Setting addon registry=true in "addons-748280"
	I0531 18:45:34.800574    8307 host.go:66] Checking if "addons-748280" exists ...
	I0531 18:45:34.801066    8307 cli_runner.go:164] Run: docker container inspect addons-748280 --format={{.State.Status}}
	I0531 18:45:34.801160    8307 addons.go:66] Setting storage-provisioner=true in profile "addons-748280"
	I0531 18:45:34.801169    8307 addons.go:228] Setting addon storage-provisioner=true in "addons-748280"
	I0531 18:45:34.801207    8307 host.go:66] Checking if "addons-748280" exists ...
	I0531 18:45:34.801666    8307 cli_runner.go:164] Run: docker container inspect addons-748280 --format={{.State.Status}}
	I0531 18:45:34.843678    8307 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0531 18:45:34.847907    8307 addons.go:420] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0531 18:45:34.847940    8307 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0531 18:45:34.848028    8307 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-748280
	I0531 18:45:34.854229    8307 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.5
	I0531 18:45:34.856396    8307 addons.go:420] installing /etc/kubernetes/addons/deployment.yaml
	I0531 18:45:34.856416    8307 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1003 bytes)
	I0531 18:45:34.856507    8307 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-748280
	I0531 18:45:34.859953    8307 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0531 18:45:34.864384    8307 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0531 18:45:34.866240    8307 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0531 18:45:34.867941    8307 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0531 18:45:34.871142    8307 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0531 18:45:34.872755    8307 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0531 18:45:34.890996    8307 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0531 18:45:34.895486    8307 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0531 18:45:34.897367    8307 addons.go:420] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0531 18:45:34.897391    8307 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0531 18:45:34.897457    8307 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-748280
	I0531 18:45:35.034638    8307 out.go:177]   - Using image docker.io/registry:2.8.1
	I0531 18:45:35.028066    8307 host.go:66] Checking if "addons-748280" exists ...
	I0531 18:45:35.048115    8307 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.6.3
	I0531 18:45:35.049916    8307 addons.go:420] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0531 18:45:35.049964    8307 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0531 18:45:35.050059    8307 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-748280
	I0531 18:45:35.048126    8307 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.7.0
	I0531 18:45:35.048134    8307 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.16.1
	I0531 18:45:35.066152    8307 addons.go:420] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0531 18:45:35.066181    8307 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0531 18:45:35.066270    8307 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-748280
	I0531 18:45:35.062895    8307 out.go:177]   - Using image gcr.io/google_containers/kube-registry-proxy:0.4
	I0531 18:45:35.077212    8307 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.2
	I0531 18:45:35.082693    8307 addons.go:420] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0531 18:45:35.082831    8307 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0531 18:45:35.083049    8307 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-748280
	I0531 18:45:35.080231    8307 addons.go:420] installing /etc/kubernetes/addons/registry-rc.yaml
	I0531 18:45:35.083373    8307 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (798 bytes)
	I0531 18:45:35.083461    8307 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-748280
	I0531 18:45:35.087743    8307 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20230312-helm-chart-4.5.2-28-g66a760794
	I0531 18:45:35.086421    8307 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.27.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
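	The sed pipeline above rewrites the CoreDNS Corefile in place, adding a log directive after errors and inserting a hosts block ahead of the forward directive; the injected fragment, reconstructed from the sed expressions, and a way to view the result:
	    # hosts {
	    #    192.168.49.1 host.minikube.internal
	    #    fallthrough
	    # }
	    sudo /var/lib/minikube/binaries/v1.27.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml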
	I0531 18:45:35.096188    8307 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20230312-helm-chart-4.5.2-28-g66a760794
	I0531 18:45:35.098367    8307 addons.go:420] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0531 18:45:35.098425    8307 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16145 bytes)
	I0531 18:45:35.098522    8307 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-748280
	I0531 18:45:35.109057    8307 addons.go:228] Setting addon default-storageclass=true in "addons-748280"
	I0531 18:45:35.109166    8307 host.go:66] Checking if "addons-748280" exists ...
	I0531 18:45:35.109795    8307 cli_runner.go:164] Run: docker container inspect addons-748280 --format={{.State.Status}}
	I0531 18:45:35.134953    8307 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/16569-2389/.minikube/machines/addons-748280/id_rsa Username:docker}
	I0531 18:45:35.154085    8307 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/16569-2389/.minikube/machines/addons-748280/id_rsa Username:docker}
	I0531 18:45:35.173325    8307 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/16569-2389/.minikube/machines/addons-748280/id_rsa Username:docker}
	I0531 18:45:35.202452    8307 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/16569-2389/.minikube/machines/addons-748280/id_rsa Username:docker}
	I0531 18:45:35.228466    8307 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0531 18:45:35.239095    8307 addons.go:420] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0531 18:45:35.239114    8307 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0531 18:45:35.239177    8307 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-748280
	I0531 18:45:35.240831    8307 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/16569-2389/.minikube/machines/addons-748280/id_rsa Username:docker}
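
Each "scp memory --> /etc/kubernetes/addons/... (N bytes)" line copies a manifest embedded in the minikube binary straight onto the node over the SSH session opened just above; no local temp file is involved. A rough stand-in for that, assuming plain OpenSSH plus sudo tee on the node (the real ssh_runner uses an in-process SSH client, not the ssh binary):

    package main

    import (
        "bytes"
        "fmt"
        "os/exec"
    )

    // pushBytes streams an in-memory manifest onto the node by piping it
    // through ssh into sudo tee, approximating "scp memory --> path".
    func pushBytes(port, target, remotePath string, data []byte) error {
        cmd := exec.Command("ssh", "-p", port, target,
            fmt.Sprintf("sudo tee %s >/dev/null", remotePath))
        cmd.Stdin = bytes.NewReader(data)
        return cmd.Run()
    }

    func main() {
        manifest := []byte("apiVersion: v1\nkind: Namespace\nmetadata:\n  name: demo\n")
        // Port and user mirror the sshutil lines in this log.
        if err := pushBytes("32772", "docker@127.0.0.1", "/etc/kubernetes/addons/demo.yaml", manifest); err != nil {
            fmt.Println("push failed:", err)
        }
    }
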
	I0531 18:45:35.312438    8307 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/16569-2389/.minikube/machines/addons-748280/id_rsa Username:docker}
	I0531 18:45:35.331207    8307 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/16569-2389/.minikube/machines/addons-748280/id_rsa Username:docker}
	I0531 18:45:35.392830    8307 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/16569-2389/.minikube/machines/addons-748280/id_rsa Username:docker}
	I0531 18:45:35.394504    8307 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/16569-2389/.minikube/machines/addons-748280/id_rsa Username:docker}
	I0531 18:45:35.401783    8307 addons.go:420] installing /etc/kubernetes/addons/storageclass.yaml
	I0531 18:45:35.401804    8307 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0531 18:45:35.401865    8307 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-748280
	I0531 18:45:35.456019    8307 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/16569-2389/.minikube/machines/addons-748280/id_rsa Username:docker}
	I0531 18:45:35.604341    8307 addons.go:420] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0531 18:45:35.604378    8307 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0531 18:45:35.610835    8307 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0531 18:45:35.672621    8307 addons.go:420] installing /etc/kubernetes/addons/registry-svc.yaml
	I0531 18:45:35.672640    8307 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0531 18:45:35.676037    8307 addons.go:420] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0531 18:45:35.676055    8307 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0531 18:45:35.723105    8307 addons.go:420] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0531 18:45:35.723174    8307 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0531 18:45:35.750031    8307 addons.go:420] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0531 18:45:35.750095    8307 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0531 18:45:35.755152    8307 addons.go:420] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0531 18:45:35.755215    8307 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0531 18:45:35.839029    8307 addons.go:420] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0531 18:45:35.839050    8307 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0531 18:45:35.870498    8307 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0531 18:45:35.889628    8307 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0531 18:45:35.912039    8307 addons.go:420] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0531 18:45:35.912058    8307 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (950 bytes)
	I0531 18:45:35.927616    8307 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0531 18:45:35.931936    8307 addons.go:420] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0531 18:45:35.932004    8307 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0531 18:45:35.961281    8307 addons.go:420] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0531 18:45:35.961339    8307 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0531 18:45:35.982944    8307 addons.go:420] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0531 18:45:35.983001    8307 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0531 18:45:36.035259    8307 addons.go:420] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0531 18:45:36.035318    8307 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0531 18:45:36.079408    8307 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0531 18:45:36.093709    8307 addons.go:420] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0531 18:45:36.093775    8307 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0531 18:45:36.123904    8307 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0531 18:45:36.144705    8307 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
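
All of the apply steps follow one pattern: run the node's pinned kubectl binary as root with KUBECONFIG pointing at /var/lib/minikube/kubeconfig, passing every manifest of an addon as a separate -f flag so the group is applied in a single invocation (sudo accepts the leading VAR=value assignment). A simplified sketch of assembling that command locally; the real version executes it through ssh_runner:

    package main

    import (
        "fmt"
        "os/exec"
    )

    // applyAddon builds the "sudo KUBECONFIG=... kubectl apply -f a.yaml -f b.yaml"
    // invocation seen throughout this log for a group of addon manifests.
    func applyAddon(manifests ...string) *exec.Cmd {
        args := []string{
            "KUBECONFIG=/var/lib/minikube/kubeconfig",
            "/var/lib/minikube/binaries/v1.27.2/kubectl", "apply",
        }
        for _, m := range manifests {
            args = append(args, "-f", m)
        }
        return exec.Command("sudo", args...)
    }

    func main() {
        cmd := applyAddon(
            "/etc/kubernetes/addons/registry-rc.yaml",
            "/etc/kubernetes/addons/registry-svc.yaml",
            "/etc/kubernetes/addons/registry-proxy.yaml",
        )
        fmt.Println(cmd.String())
    }
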
	I0531 18:45:36.148012    8307 addons.go:420] installing /etc/kubernetes/addons/ig-role.yaml
	I0531 18:45:36.148037    8307 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0531 18:45:36.189986    8307 addons.go:420] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0531 18:45:36.190012    8307 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0531 18:45:36.221682    8307 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0531 18:45:36.303622    8307 addons.go:420] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0531 18:45:36.303647    8307 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0531 18:45:36.324004    8307 addons.go:420] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0531 18:45:36.324029    8307 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0531 18:45:36.464543    8307 addons.go:420] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0531 18:45:36.464574    8307 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0531 18:45:36.483044    8307 addons.go:420] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0531 18:45:36.483068    8307 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0531 18:45:36.583047    8307 addons.go:420] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0531 18:45:36.583115    8307 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0531 18:45:36.604836    8307 kapi.go:248] "coredns" deployment in "kube-system" namespace and "addons-748280" context rescaled to 1 replicas
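
kapi.go:248 drops the coredns deployment to a single replica, which is all a single-node cluster needs. Rescaling through the scale subresource with client-go looks roughly like this (a sketch against the kubeconfig path from the log, not minikube's own implementation):

    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Build a clientset from the same kubeconfig the log's kubectl calls use.
        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        ctx := context.Background()
        // Read the current scale subresource of kube-system/coredns...
        scale, err := cs.AppsV1().Deployments("kube-system").GetScale(ctx, "coredns", metav1.GetOptions{})
        if err != nil {
            panic(err)
        }
        // ...and write it back with a single replica.
        scale.Spec.Replicas = 1
        if _, err := cs.AppsV1().Deployments("kube-system").UpdateScale(ctx, "coredns", scale, metav1.UpdateOptions{}); err != nil {
            panic(err)
        }
        fmt.Println("coredns rescaled to 1 replica")
    }
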
	I0531 18:45:36.604922    8307 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0531 18:45:36.608052    8307 out.go:177] * Verifying Kubernetes components...
	I0531 18:45:36.609755    8307 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
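
The component verification that follows boils down to an exit-code test: systemctl is-active returns 0 only when the unit is active, and --quiet suppresses all output, so the exit status is the whole answer. In Go that check is simply (the literal "service" token in the logged command is a minikube quirk; systemctl takes the unit name directly):

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // Run() returns nil exactly when systemctl exits 0, i.e. kubelet is active.
        err := exec.Command("sudo", "systemctl", "is-active", "--quiet", "kubelet").Run()
        fmt.Println("kubelet active:", err == nil)
    }
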
	I0531 18:45:36.631756    8307 addons.go:420] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0531 18:45:36.631818    8307 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0531 18:45:36.723155    8307 addons.go:420] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0531 18:45:36.723336    8307 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0531 18:45:36.723319    8307 addons.go:420] installing /etc/kubernetes/addons/ig-crd.yaml
	I0531 18:45:36.723419    8307 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0531 18:45:36.764383    8307 addons.go:420] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0531 18:45:36.764402    8307 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7741 bytes)
	I0531 18:45:36.771995    8307 addons.go:420] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0531 18:45:36.772015    8307 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0531 18:45:36.801659    8307 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.2/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0531 18:45:36.834383    8307 addons.go:420] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0531 18:45:36.834407    8307 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0531 18:45:37.026676    8307 addons.go:420] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0531 18:45:37.026701    8307 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0531 18:45:37.181870    8307 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0531 18:45:38.305720    8307 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.27.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (3.211769898s)
	I0531 18:45:38.305749    8307 start.go:916] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I0531 18:45:38.948570    8307 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (3.337703875s)
	I0531 18:45:40.694373    8307 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (4.823809012s)
	I0531 18:45:40.694414    8307 addons.go:464] Verifying addon ingress=true in "addons-748280"
	I0531 18:45:40.696538    8307 out.go:177] * Verifying ingress addon...
	I0531 18:45:40.694597    8307 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (4.804949432s)
	I0531 18:45:40.694726    8307 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.767089735s)
	I0531 18:45:40.694774    8307 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (4.615309889s)
	I0531 18:45:40.694810    8307 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (4.570880582s)
	I0531 18:45:40.694882    8307 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (4.550151931s)
	I0531 18:45:40.694997    8307 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (4.085186488s)
	I0531 18:45:40.695051    8307 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.2/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (3.893323211s)
	I0531 18:45:40.695237    8307 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (4.473255497s)
	W0531 18:45:40.698396    8307 addons.go:446] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0531 18:45:40.698451    8307 retry.go:31] will retry after 134.338127ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0531 18:45:40.699398    8307 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0531 18:45:40.699653    8307 addons.go:464] Verifying addon registry=true in "addons-748280"
	I0531 18:45:40.702919    8307 out.go:177] * Verifying registry addon...
	I0531 18:45:40.700042    8307 addons.go:464] Verifying addon metrics-server=true in "addons-748280"
	I0531 18:45:40.700911    8307 node_ready.go:35] waiting up to 6m0s for node "addons-748280" to be "Ready" ...
	I0531 18:45:40.705418    8307 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0531 18:45:40.727177    8307 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0531 18:45:40.727207    8307 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 18:45:40.728985    8307 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I0531 18:45:40.729012    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0531 18:45:40.833001    8307 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
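
The apply failure above is a classic CRD ordering race: the batch creates the VolumeSnapshot CRDs and a VolumeSnapshotClass in the same kubectl invocation, and kubectl's discovery cache does not yet know the new snapshot.storage.k8s.io/v1 kinds, hence "resource mapping not found ... ensure CRDs are installed first". retry.go:31 simply waits (134ms here) and re-runs the apply, now with --force, by which time the CRDs are established. A generic backoff sketch of that recover-by-retry pattern; the command, file, and durations are illustrative:

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    // applyWithRetry re-runs kubectl apply with growing pauses, enough to
    // ride out the window between CRD creation and API discovery catching up.
    func applyWithRetry(attempts int, args ...string) error {
        delay := 150 * time.Millisecond
        var err error
        for i := 0; i < attempts; i++ {
            if err = exec.Command("kubectl", args...).Run(); err == nil {
                return nil
            }
            fmt.Printf("apply failed (attempt %d), retrying in %v\n", i+1, delay)
            time.Sleep(delay)
            delay *= 2
        }
        return err
    }

    func main() {
        err := applyWithRetry(5, "apply", "--force",
            "-f", "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml")
        if err != nil {
            fmt.Println("giving up:", err)
        }
    }
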
	I0531 18:45:41.110433    8307 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (3.928497816s)
	I0531 18:45:41.110506    8307 addons.go:464] Verifying addon csi-hostpath-driver=true in "addons-748280"
	I0531 18:45:41.112636    8307 out.go:177] * Verifying csi-hostpath-driver addon...
	I0531 18:45:41.115915    8307 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0531 18:45:41.151445    8307 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0531 18:45:41.151512    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
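
Each kapi.go:96 line below is one tick of a polling loop: list pods by label selector and keep waiting while any match is still Pending; the same loop runs in parallel for the ingress-nginx, registry, csi-hostpath-driver, and (shortly) gcp-auth selectors. A condensed client-go version of such a wait; a sketch of the shape of the loop, not minikube's actual kapi code:

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // waitForPodsRunning polls pods matching selector in ns until all of them
    // are Running, or the timeout expires.
    func waitForPodsRunning(cs *kubernetes.Clientset, ns, selector string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            pods, err := cs.CoreV1().Pods(ns).List(context.Background(),
                metav1.ListOptions{LabelSelector: selector})
            if err == nil && len(pods.Items) > 0 {
                ready := true
                for _, p := range pods.Items {
                    if p.Status.Phase != corev1.PodRunning {
                        fmt.Printf("waiting for pod %q, current state: %s\n", selector, p.Status.Phase)
                        ready = false
                        break
                    }
                }
                if ready {
                    return nil
                }
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("pods %q in %q not running after %v", selector, ns, timeout)
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        err = waitForPodsRunning(cs, "kube-system", "kubernetes.io/minikube-addons=registry", 6*time.Minute)
        fmt.Println(err)
    }
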
	I0531 18:45:41.233867    8307 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 18:45:41.238589    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0531 18:45:41.670115    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:45:41.787279    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0531 18:45:41.788576    8307 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 18:45:42.166273    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:45:42.237217    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0531 18:45:42.241429    8307 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 18:45:42.644305    8307 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.811186922s)
	I0531 18:45:42.659322    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:45:42.734239    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0531 18:45:42.735504    8307 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 18:45:42.742402    8307 node_ready.go:58] node "addons-748280" has status "Ready":"False"
	I0531 18:45:43.158489    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:45:43.172762    8307 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0531 18:45:43.172853    8307 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-748280
	I0531 18:45:43.218852    8307 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/16569-2389/.minikube/machines/addons-748280/id_rsa Username:docker}
	I0531 18:45:43.235140    8307 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 18:45:43.244382    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0531 18:45:43.408630    8307 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0531 18:45:43.487002    8307 addons.go:228] Setting addon gcp-auth=true in "addons-748280"
	I0531 18:45:43.487051    8307 host.go:66] Checking if "addons-748280" exists ...
	I0531 18:45:43.487497    8307 cli_runner.go:164] Run: docker container inspect addons-748280 --format={{.State.Status}}
	I0531 18:45:43.516870    8307 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0531 18:45:43.516919    8307 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-748280
	I0531 18:45:43.555943    8307 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/16569-2389/.minikube/machines/addons-748280/id_rsa Username:docker}
	I0531 18:45:43.656802    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:45:43.677857    8307 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20230312-helm-chart-4.5.2-28-g66a760794
	I0531 18:45:43.679856    8307 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.0
	I0531 18:45:43.682015    8307 addons.go:420] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0531 18:45:43.682078    8307 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0531 18:45:43.714093    8307 addons.go:420] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0531 18:45:43.714167    8307 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0531 18:45:43.732623    8307 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 18:45:43.736905    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0531 18:45:43.755717    8307 addons.go:420] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0531 18:45:43.755791    8307 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5474 bytes)
	I0531 18:45:43.787125    8307 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0531 18:45:44.178117    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:45:44.261992    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0531 18:45:44.263188    8307 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 18:45:44.689605    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:45:44.771540    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0531 18:45:44.772175    8307 node_ready.go:58] node "addons-748280" has status "Ready":"False"
	I0531 18:45:44.773306    8307 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 18:45:45.164683    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:45:45.241659    8307 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 18:45:45.250793    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0531 18:45:45.657018    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:45:45.743220    8307 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 18:45:45.750116    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0531 18:45:46.166120    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:45:46.249888    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0531 18:45:46.253644    8307 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 18:45:46.530332    8307 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (2.742831399s)
	I0531 18:45:46.532628    8307 addons.go:464] Verifying addon gcp-auth=true in "addons-748280"
	I0531 18:45:46.535823    8307 out.go:177] * Verifying gcp-auth addon...
	I0531 18:45:46.538218    8307 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0531 18:45:46.546546    8307 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0531 18:45:46.546563    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 18:45:46.665955    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:45:46.737589    8307 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 18:45:46.738365    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0531 18:45:47.052189    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 18:45:47.157982    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:45:47.238507    8307 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 18:45:47.239405    8307 node_ready.go:58] node "addons-748280" has status "Ready":"False"
	I0531 18:45:47.243137    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0531 18:45:47.557067    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 18:45:47.656349    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:45:47.733201    8307 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 18:45:47.735496    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0531 18:45:48.051873    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 18:45:48.158333    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:45:48.238669    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0531 18:45:48.239249    8307 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 18:45:48.550549    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 18:45:48.656383    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:45:48.734883    8307 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 18:45:48.742473    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0531 18:45:49.051526    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 18:45:49.156607    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:45:49.236533    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0531 18:45:49.237480    8307 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 18:45:49.550747    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 18:45:49.657047    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:45:49.737371    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0531 18:45:49.738356    8307 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 18:45:49.740805    8307 node_ready.go:58] node "addons-748280" has status "Ready":"False"
	I0531 18:45:50.050550    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 18:45:50.156077    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:45:50.231539    8307 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 18:45:50.238016    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0531 18:45:50.550651    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 18:45:50.656401    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:45:50.734903    8307 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 18:45:50.740776    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0531 18:45:51.051968    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 18:45:51.159327    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:45:51.239438    8307 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 18:45:51.242293    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0531 18:45:51.550908    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 18:45:51.657129    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:45:51.733734    8307 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 18:45:51.737237    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0531 18:45:52.050709    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 18:45:52.156409    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:45:52.233955    8307 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 18:45:52.237473    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0531 18:45:52.238599    8307 node_ready.go:58] node "addons-748280" has status "Ready":"False"
	I0531 18:45:52.551937    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 18:45:52.656658    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:45:52.734946    8307 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 18:45:52.745738    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0531 18:45:53.050557    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 18:45:53.156684    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:45:53.233257    8307 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 18:45:53.237581    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0531 18:45:53.550707    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 18:45:53.657307    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:45:53.735013    8307 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 18:45:53.746434    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0531 18:45:54.051203    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 18:45:54.156735    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:45:54.237549    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0531 18:45:54.237817    8307 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 18:45:54.246337    8307 node_ready.go:58] node "addons-748280" has status "Ready":"False"
	I0531 18:45:54.551335    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 18:45:54.657128    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:45:54.731915    8307 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 18:45:54.748528    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0531 18:45:55.051456    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 18:45:55.157887    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:45:55.236000    8307 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 18:45:55.248106    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0531 18:45:55.551340    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 18:45:55.658112    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:45:55.742671    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0531 18:45:55.744342    8307 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 18:45:56.050615    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 18:45:56.156189    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:45:56.232118    8307 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 18:45:56.236146    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0531 18:45:56.550923    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 18:45:56.655937    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:45:56.732289    8307 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 18:45:56.735230    8307 node_ready.go:58] node "addons-748280" has status "Ready":"False"
	I0531 18:45:56.735893    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0531 18:45:57.050667    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 18:45:57.156554    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:45:57.232270    8307 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 18:45:57.235120    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0531 18:45:57.550089    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 18:45:57.657669    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:45:57.732272    8307 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 18:45:57.734467    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0531 18:45:58.051734    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 18:45:58.156638    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:45:58.232972    8307 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 18:45:58.234651    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0531 18:45:58.550462    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 18:45:58.656512    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:45:58.732952    8307 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 18:45:58.735820    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0531 18:45:59.050349    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 18:45:59.156145    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:45:59.232707    8307 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 18:45:59.234299    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0531 18:45:59.236061    8307 node_ready.go:58] node "addons-748280" has status "Ready":"False"
	I0531 18:45:59.550626    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 18:45:59.657657    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:45:59.732015    8307 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 18:45:59.733702    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0531 18:46:00.055339    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 18:46:00.156853    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:46:00.231944    8307 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 18:46:00.236715    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0531 18:46:00.550282    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 18:46:00.657229    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:46:00.731719    8307 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 18:46:00.734199    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0531 18:46:01.051023    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 18:46:01.155911    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:46:01.232521    8307 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 18:46:01.234080    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0531 18:46:01.550503    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 18:46:01.656431    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:46:01.732588    8307 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 18:46:01.734344    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0531 18:46:01.736687    8307 node_ready.go:58] node "addons-748280" has status "Ready":"False"
	I0531 18:46:02.050643    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 18:46:02.156927    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:46:02.234115    8307 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 18:46:02.235400    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0531 18:46:02.551908    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 18:46:02.656693    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:46:02.733524    8307 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 18:46:02.733780    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0531 18:46:03.051104    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 18:46:03.156441    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:46:03.233004    8307 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 18:46:03.239091    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0531 18:46:03.551012    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 18:46:03.656099    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:46:03.731938    8307 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 18:46:03.736924    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0531 18:46:04.052863    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 18:46:04.156598    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:46:04.232795    8307 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 18:46:04.234963    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0531 18:46:04.236928    8307 node_ready.go:58] node "addons-748280" has status "Ready":"False"
	I0531 18:46:04.550550    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 18:46:04.656695    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:46:04.732422    8307 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 18:46:04.735251    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0531 18:46:05.050454    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 18:46:05.156857    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:46:05.232157    8307 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 18:46:05.234471    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0531 18:46:05.550043    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 18:46:05.657867    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:46:05.734471    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0531 18:46:05.735605    8307 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 18:46:06.050641    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 18:46:06.156087    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:46:06.231891    8307 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 18:46:06.235846    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0531 18:46:06.550422    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 18:46:06.656428    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:46:06.733650    8307 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 18:46:06.735627    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0531 18:46:06.736058    8307 node_ready.go:58] node "addons-748280" has status "Ready":"False"
	I0531 18:46:07.050953    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 18:46:07.156483    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:46:07.232270    8307 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 18:46:07.235771    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0531 18:46:07.550593    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 18:46:07.656246    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:46:07.732797    8307 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 18:46:07.734919    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0531 18:46:08.050235    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 18:46:08.158407    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:46:08.232470    8307 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 18:46:08.235200    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0531 18:46:08.551173    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 18:46:08.656205    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:46:08.732880    8307 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 18:46:08.735513    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0531 18:46:08.740321    8307 node_ready.go:58] node "addons-748280" has status "Ready":"False"
	I0531 18:46:09.050296    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 18:46:09.156280    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:46:09.232163    8307 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 18:46:09.236479    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0531 18:46:09.578085    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 18:46:09.682101    8307 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0531 18:46:09.682128    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:46:09.753869    8307 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 18:46:09.758724    8307 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0531 18:46:09.758763    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0531 18:46:09.763907    8307 node_ready.go:49] node "addons-748280" has status "Ready":"True"
	I0531 18:46:09.763931    8307 node_ready.go:38] duration metric: took 29.060832781s waiting for node "addons-748280" to be "Ready" ...
	I0531 18:46:09.763941    8307 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
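The flip at 18:46:09 is node_ready.go observing the node's NodeReady condition turn True after 29.06s; until then the addon pods above could only sit Pending. The same condition can be read directly with kubectl (the jsonpath form below is illustrative, not the harness's own call):

	# Prints "True" once the kubelet posts a healthy NodeReady condition:
	kubectl --context addons-748280 get node addons-748280 \
	  -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'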
	I0531 18:46:09.777484    8307 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5d78c9869d-ctb4p" in "kube-system" namespace to be "Ready" ...
	I0531 18:46:10.079515    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 18:46:10.175397    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:46:10.234073    8307 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 18:46:10.238077    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0531 18:46:10.551097    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 18:46:10.659215    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:46:10.732133    8307 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 18:46:10.735192    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0531 18:46:11.053394    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 18:46:11.164120    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:46:11.243240    8307 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 18:46:11.245192    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0531 18:46:11.551597    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 18:46:11.691585    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:46:11.740046    8307 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 18:46:11.755460    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0531 18:46:11.809900    8307 pod_ready.go:102] pod "coredns-5d78c9869d-ctb4p" in "kube-system" namespace has status "Ready":"False"
	I0531 18:46:12.051065    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 18:46:12.157282    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:46:12.244908    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0531 18:46:12.245166    8307 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 18:46:12.563992    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 18:46:12.660035    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:46:12.736431    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0531 18:46:12.738093    8307 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 18:46:12.803413    8307 pod_ready.go:92] pod "coredns-5d78c9869d-ctb4p" in "kube-system" namespace has status "Ready":"True"
	I0531 18:46:12.803441    8307 pod_ready.go:81] duration metric: took 3.025927803s waiting for pod "coredns-5d78c9869d-ctb4p" in "kube-system" namespace to be "Ready" ...
	I0531 18:46:12.803473    8307 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-748280" in "kube-system" namespace to be "Ready" ...
	I0531 18:46:12.811983    8307 pod_ready.go:92] pod "etcd-addons-748280" in "kube-system" namespace has status "Ready":"True"
	I0531 18:46:12.812016    8307 pod_ready.go:81] duration metric: took 8.52385ms waiting for pod "etcd-addons-748280" in "kube-system" namespace to be "Ready" ...
	I0531 18:46:12.812075    8307 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-748280" in "kube-system" namespace to be "Ready" ...
	I0531 18:46:12.821102    8307 pod_ready.go:92] pod "kube-apiserver-addons-748280" in "kube-system" namespace has status "Ready":"True"
	I0531 18:46:12.821138    8307 pod_ready.go:81] duration metric: took 9.039464ms waiting for pod "kube-apiserver-addons-748280" in "kube-system" namespace to be "Ready" ...
	I0531 18:46:12.821150    8307 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-748280" in "kube-system" namespace to be "Ready" ...
	I0531 18:46:12.828456    8307 pod_ready.go:92] pod "kube-controller-manager-addons-748280" in "kube-system" namespace has status "Ready":"True"
	I0531 18:46:12.828479    8307 pod_ready.go:81] duration metric: took 7.322137ms waiting for pod "kube-controller-manager-addons-748280" in "kube-system" namespace to be "Ready" ...
	I0531 18:46:12.828495    8307 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-k8k6d" in "kube-system" namespace to be "Ready" ...
	I0531 18:46:12.835284    8307 pod_ready.go:92] pod "kube-proxy-k8k6d" in "kube-system" namespace has status "Ready":"True"
	I0531 18:46:12.835307    8307 pod_ready.go:81] duration metric: took 6.805292ms waiting for pod "kube-proxy-k8k6d" in "kube-system" namespace to be "Ready" ...
	I0531 18:46:12.835318    8307 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-748280" in "kube-system" namespace to be "Ready" ...
	I0531 18:46:13.051528    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 18:46:13.160101    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:46:13.200163    8307 pod_ready.go:92] pod "kube-scheduler-addons-748280" in "kube-system" namespace has status "Ready":"True"
	I0531 18:46:13.200190    8307 pod_ready.go:81] duration metric: took 364.864033ms waiting for pod "kube-scheduler-addons-748280" in "kube-system" namespace to be "Ready" ...
	I0531 18:46:13.200202    8307 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-844d8db974-vjh5j" in "kube-system" namespace to be "Ready" ...
	I0531 18:46:13.234822    8307 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 18:46:13.239050    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0531 18:46:13.551019    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 18:46:13.658550    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:46:13.735243    8307 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 18:46:13.739203    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0531 18:46:14.055779    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 18:46:14.159695    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:46:14.241164    8307 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 18:46:14.249309    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0531 18:46:14.553880    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 18:46:14.677060    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:46:14.740816    8307 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 18:46:14.741018    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0531 18:46:15.058343    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 18:46:15.183725    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:46:15.240754    8307 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 18:46:15.243905    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0531 18:46:15.551561    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 18:46:15.611133    8307 pod_ready.go:102] pod "metrics-server-844d8db974-vjh5j" in "kube-system" namespace has status "Ready":"False"
	I0531 18:46:15.659998    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:46:15.737543    8307 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 18:46:15.738880    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0531 18:46:16.063365    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 18:46:16.125522    8307 pod_ready.go:92] pod "metrics-server-844d8db974-vjh5j" in "kube-system" namespace has status "Ready":"True"
	I0531 18:46:16.125553    8307 pod_ready.go:81] duration metric: took 2.925343008s waiting for pod "metrics-server-844d8db974-vjh5j" in "kube-system" namespace to be "Ready" ...
	I0531 18:46:16.125574    8307 pod_ready.go:38] duration metric: took 6.361621787s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
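Each pod_ready.go:92 hit above asserts one pod's Ready condition; the whole pass over kube-dns, etcd, kube-apiserver, kube-controller-manager, kube-proxy, kube-scheduler, and metrics-server took 6.36s. A rough one-shot equivalent per label (the kubectl wait form mirrors the log's 6m budget but is not the harness's actual call, which waits on pods by name):

	# Block until pods matching a system-critical label report Ready:
	kubectl --context addons-748280 -n kube-system wait pod \
	  -l k8s-app=kube-dns --for=condition=Ready --timeout=6m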
	I0531 18:46:16.125590    8307 api_server.go:52] waiting for apiserver process to appear ...
	I0531 18:46:16.125667    8307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 18:46:16.141086    8307 api_server.go:72] duration metric: took 39.536120249s to wait for apiserver process to appear ...
	I0531 18:46:16.141155    8307 api_server.go:88] waiting for apiserver healthz status ...
	I0531 18:46:16.141201    8307 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0531 18:46:16.170116    8307 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0531 18:46:16.173939    8307 api_server.go:141] control plane version: v1.27.2
	I0531 18:46:16.174011    8307 api_server.go:131] duration metric: took 32.810734ms to wait for apiserver health ...
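Two independent probes back this healthz pass: the pgrep over SSH confirms the kube-apiserver process exists, and the HTTPS request confirms it answers. Reproduced by hand (curl -k skips certificate verification and relies on the default anonymous access to /healthz, both assumptions about this cluster's configuration):

	# Same process check the harness ran inside the node:
	minikube -p addons-748280 ssh -- sudo pgrep -xnf 'kube-apiserver.*minikube.*'
	# Expect HTTP 200 with body "ok", matching the log:
	curl -k https://192.168.49.2:8443/healthz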
	I0531 18:46:16.174033    8307 system_pods.go:43] waiting for kube-system pods to appear ...
	I0531 18:46:16.179486    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:46:16.193847    8307 system_pods.go:59] 17 kube-system pods found
	I0531 18:46:16.193921    8307 system_pods.go:61] "coredns-5d78c9869d-ctb4p" [86b196ff-3fe1-4e1b-baa5-1442e2f87a25] Running
	I0531 18:46:16.193950    8307 system_pods.go:61] "csi-hostpath-attacher-0" [0a38d877-9d43-4001-8be9-5a36ce810f69] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0531 18:46:16.193974    8307 system_pods.go:61] "csi-hostpath-resizer-0" [2fe99aec-2754-4668-8754-d46f38067eb8] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0531 18:46:16.194013    8307 system_pods.go:61] "csi-hostpathplugin-9s7pv" [afdede99-ba58-4f7c-94cc-89e879305e53] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0531 18:46:16.194034    8307 system_pods.go:61] "etcd-addons-748280" [8786107d-987d-4569-8e24-ae449d38c099] Running
	I0531 18:46:16.194069    8307 system_pods.go:61] "kindnet-265l5" [7b84e0aa-879f-4e69-961e-8c4194edd15a] Running
	I0531 18:46:16.194092    8307 system_pods.go:61] "kube-apiserver-addons-748280" [0fca224a-f38d-4262-88b0-2ff337d6f892] Running
	I0531 18:46:16.194113    8307 system_pods.go:61] "kube-controller-manager-addons-748280" [159d31ae-37a4-49b4-8c62-ec30077f09e1] Running
	I0531 18:46:16.194137    8307 system_pods.go:61] "kube-ingress-dns-minikube" [60b6737d-85a4-4be3-a343-c649a32d5573] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0531 18:46:16.194169    8307 system_pods.go:61] "kube-proxy-k8k6d" [756a2b75-7fc4-403d-a71b-951fdaf0092c] Running
	I0531 18:46:16.194193    8307 system_pods.go:61] "kube-scheduler-addons-748280" [8f52249a-2f9c-44a3-8f71-2ba8cf5b3f60] Running
	I0531 18:46:16.194214    8307 system_pods.go:61] "metrics-server-844d8db974-vjh5j" [af201e0f-457a-4fb5-91e6-f01fdfaa6868] Running
	I0531 18:46:16.194238    8307 system_pods.go:61] "registry-6hcmh" [98c2e1ee-6d1b-4140-a410-92c62d5b0c8e] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0531 18:46:16.194272    8307 system_pods.go:61] "registry-proxy-c7bxw" [6510a1f0-5ba2-49e1-8749-6a1b8101c599] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0531 18:46:16.194300    8307 system_pods.go:61] "snapshot-controller-75bbb956b9-b664t" [cff32ef5-f503-4b54-89a5-2fa37f87d544] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0531 18:46:16.194325    8307 system_pods.go:61] "snapshot-controller-75bbb956b9-l8h9p" [4e69779f-d2a9-4e23-949f-626705bea5de] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0531 18:46:16.194348    8307 system_pods.go:61] "storage-provisioner" [6e1cb66c-eee1-4f33-896a-8c80d6c8c213] Running
	I0531 18:46:16.194379    8307 system_pods.go:74] duration metric: took 20.327016ms to wait for pod list to return data ...
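system_pods.go:59/61 is a straight listing of the kube-system namespace, recording each pod's phase and any unready containers; the same 17 rows are visible with:

	# Phase and READY columns correspond to the Running/Pending states logged above:
	kubectl --context addons-748280 -n kube-system get pods -o wide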
	I0531 18:46:16.194407    8307 default_sa.go:34] waiting for default service account to be created ...
	I0531 18:46:16.211502    8307 default_sa.go:45] found service account: "default"
	I0531 18:46:16.211522    8307 default_sa.go:55] duration metric: took 17.098433ms for default service account to be created ...
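default_sa.go waits for the "default" ServiceAccount because pods cannot be admitted into a namespace until its default account exists. A direct check (the default namespace is an assumption; the log only names the account):

	# Errors until kube-controller-manager has created the ServiceAccount:
	kubectl --context addons-748280 -n default get serviceaccount default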
	I0531 18:46:16.211531    8307 system_pods.go:116] waiting for k8s-apps to be running ...
	I0531 18:46:16.226546    8307 system_pods.go:86] 17 kube-system pods found
	I0531 18:46:16.226638    8307 system_pods.go:89] "coredns-5d78c9869d-ctb4p" [86b196ff-3fe1-4e1b-baa5-1442e2f87a25] Running
	I0531 18:46:16.226663    8307 system_pods.go:89] "csi-hostpath-attacher-0" [0a38d877-9d43-4001-8be9-5a36ce810f69] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0531 18:46:16.226712    8307 system_pods.go:89] "csi-hostpath-resizer-0" [2fe99aec-2754-4668-8754-d46f38067eb8] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0531 18:46:16.226768    8307 system_pods.go:89] "csi-hostpathplugin-9s7pv" [afdede99-ba58-4f7c-94cc-89e879305e53] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0531 18:46:16.226791    8307 system_pods.go:89] "etcd-addons-748280" [8786107d-987d-4569-8e24-ae449d38c099] Running
	I0531 18:46:16.226816    8307 system_pods.go:89] "kindnet-265l5" [7b84e0aa-879f-4e69-961e-8c4194edd15a] Running
	I0531 18:46:16.226852    8307 system_pods.go:89] "kube-apiserver-addons-748280" [0fca224a-f38d-4262-88b0-2ff337d6f892] Running
	I0531 18:46:16.226879    8307 system_pods.go:89] "kube-controller-manager-addons-748280" [159d31ae-37a4-49b4-8c62-ec30077f09e1] Running
	I0531 18:46:16.226904    8307 system_pods.go:89] "kube-ingress-dns-minikube" [60b6737d-85a4-4be3-a343-c649a32d5573] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0531 18:46:16.226939    8307 system_pods.go:89] "kube-proxy-k8k6d" [756a2b75-7fc4-403d-a71b-951fdaf0092c] Running
	I0531 18:46:16.226963    8307 system_pods.go:89] "kube-scheduler-addons-748280" [8f52249a-2f9c-44a3-8f71-2ba8cf5b3f60] Running
	I0531 18:46:16.226987    8307 system_pods.go:89] "metrics-server-844d8db974-vjh5j" [af201e0f-457a-4fb5-91e6-f01fdfaa6868] Running
	I0531 18:46:16.227024    8307 system_pods.go:89] "registry-6hcmh" [98c2e1ee-6d1b-4140-a410-92c62d5b0c8e] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0531 18:46:16.227050    8307 system_pods.go:89] "registry-proxy-c7bxw" [6510a1f0-5ba2-49e1-8749-6a1b8101c599] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0531 18:46:16.227079    8307 system_pods.go:89] "snapshot-controller-75bbb956b9-b664t" [cff32ef5-f503-4b54-89a5-2fa37f87d544] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0531 18:46:16.227122    8307 system_pods.go:89] "snapshot-controller-75bbb956b9-l8h9p" [4e69779f-d2a9-4e23-949f-626705bea5de] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0531 18:46:16.227145    8307 system_pods.go:89] "storage-provisioner" [6e1cb66c-eee1-4f33-896a-8c80d6c8c213] Running
	I0531 18:46:16.227179    8307 system_pods.go:126] duration metric: took 15.642141ms to wait for k8s-apps to be running ...
	I0531 18:46:16.227206    8307 system_svc.go:44] waiting for kubelet service to be running ....
	I0531 18:46:16.227287    8307 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0531 18:46:16.251604    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0531 18:46:16.252052    8307 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 18:46:16.266019    8307 system_svc.go:56] duration metric: took 38.804735ms WaitForService to wait for kubelet.
	I0531 18:46:16.266117    8307 kubeadm.go:581] duration metric: took 39.661154429s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
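The systemctl run at 18:46:16 executes inside the node over SSH, and system_svc.go only inspects the exit status of is-active --quiet. Without --quiet the same check is human-readable (using minikube ssh as the entry point is illustrative):

	# Prints "active" while the kubelet unit is running; nonzero exit otherwise:
	minikube -p addons-748280 ssh -- sudo systemctl is-active kubelet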
	I0531 18:46:16.266151    8307 node_conditions.go:102] verifying NodePressure condition ...
	I0531 18:46:16.399440    8307 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0531 18:46:16.399514    8307 node_conditions.go:123] node cpu capacity is 2
	I0531 18:46:16.399539    8307 node_conditions.go:105] duration metric: took 133.367205ms to run NodePressure ...
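The NodePressure step reads the node's capacity fields, which is where the 203034800Ki ephemeral-storage and 2-CPU figures above come from. To surface the same numbers:

	# The capacity map carries the ephemeral-storage and cpu values logged above:
	kubectl --context addons-748280 get node addons-748280 -o jsonpath='{.status.capacity}'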
	I0531 18:46:16.399565    8307 start.go:228] waiting for startup goroutines ...
	I0531 18:46:16.553389    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 18:46:16.658704    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:46:16.734365    8307 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 18:46:16.737378    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0531 18:46:17.051177    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 18:46:17.183110    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:46:17.233267    8307 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 18:46:17.237068    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0531 18:46:17.551315    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 18:46:17.658042    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:46:17.733840    8307 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 18:46:17.735063    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0531 18:46:18.051186    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 18:46:18.158058    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:46:18.232998    8307 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 18:46:18.234769    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0531 18:46:18.550532    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 18:46:18.659567    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:46:18.735526    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0531 18:46:18.737436    8307 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 18:46:19.051179    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 18:46:19.160000    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:46:19.235351    8307 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 18:46:19.243917    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0531 18:46:19.550563    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 18:46:19.668269    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:46:19.733796    8307 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 18:46:19.736989    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0531 18:46:20.051160    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 18:46:20.166318    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:46:20.237507    8307 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 18:46:20.243875    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0531 18:46:20.552967    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 18:46:20.680633    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:46:20.733959    8307 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 18:46:20.751618    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0531 18:46:21.050571    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 18:46:21.160308    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:46:21.235061    8307 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 18:46:21.236409    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0531 18:46:21.551140    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 18:46:21.658240    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:46:21.732551    8307 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 18:46:21.737522    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0531 18:46:22.050762    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 18:46:22.173626    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:46:22.235839    8307 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 18:46:22.240780    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0531 18:46:22.551530    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 18:46:22.657992    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:46:22.737418    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0531 18:46:22.739002    8307 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 18:46:23.054433    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 18:46:23.160253    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:46:23.237371    8307 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 18:46:23.240106    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0531 18:46:23.551016    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 18:46:23.657790    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:46:23.733126    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0531 18:46:23.734233    8307 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 18:46:24.050304    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 18:46:24.158866    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:46:24.234991    8307 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 18:46:24.239373    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0531 18:46:24.550943    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 18:46:24.659213    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:46:24.734040    8307 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 18:46:24.737474    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0531 18:46:25.051564    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 18:46:25.159341    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:46:25.232972    8307 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 18:46:25.236106    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0531 18:46:25.551987    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 18:46:25.663678    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:46:25.734074    8307 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 18:46:25.735497    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0531 18:46:26.050774    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 18:46:26.157484    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:46:26.232957    8307 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 18:46:26.234932    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0531 18:46:26.555240    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 18:46:26.657702    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:46:26.732109    8307 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 18:46:26.734871    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0531 18:46:27.050816    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 18:46:27.161727    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:46:27.233765    8307 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 18:46:27.236094    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0531 18:46:27.571929    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 18:46:27.658538    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:46:27.738056    8307 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 18:46:27.742304    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0531 18:46:28.053089    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 18:46:28.157431    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:46:28.233128    8307 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 18:46:28.235370    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0531 18:46:28.557725    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 18:46:28.659333    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:46:28.736113    8307 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 18:46:28.736974    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0531 18:46:29.050286    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 18:46:29.159577    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:46:29.233167    8307 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 18:46:29.234830    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0531 18:46:29.551546    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 18:46:29.662858    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:46:29.732865    8307 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 18:46:29.741778    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0531 18:46:30.051128    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 18:46:30.158918    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:46:30.234392    8307 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 18:46:30.235004    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0531 18:46:30.563404    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 18:46:30.659273    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:46:30.732577    8307 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 18:46:30.735690    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0531 18:46:31.061799    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 18:46:31.160162    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:46:31.233787    8307 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 18:46:31.239814    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0531 18:46:31.564743    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 18:46:31.659312    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:46:31.734853    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0531 18:46:31.735911    8307 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 18:46:32.051122    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 18:46:32.157418    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:46:32.232792    8307 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 18:46:32.234958    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0531 18:46:32.562647    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 18:46:32.657454    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:46:32.732467    8307 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 18:46:32.735730    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0531 18:46:33.051290    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 18:46:33.159188    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:46:33.233996    8307 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 18:46:33.235700    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0531 18:46:33.551637    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 18:46:33.658205    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:46:33.734094    8307 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 18:46:33.739091    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0531 18:46:34.056308    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 18:46:34.160029    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:46:34.235193    8307 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 18:46:34.237328    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0531 18:46:34.552462    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 18:46:34.661315    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:46:34.736594    8307 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 18:46:34.739509    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0531 18:46:35.058771    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 18:46:35.157932    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:46:35.235006    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0531 18:46:35.237145    8307 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 18:46:35.551212    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 18:46:35.659222    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:46:35.734392    8307 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 18:46:35.735376    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0531 18:46:36.050420    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 18:46:36.157150    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:46:36.232279    8307 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 18:46:36.233969    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0531 18:46:36.552806    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 18:46:36.657397    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:46:36.732451    8307 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 18:46:36.734419    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0531 18:46:37.050235    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 18:46:37.160711    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:46:37.236046    8307 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 18:46:37.236605    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0531 18:46:37.552859    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 18:46:37.661128    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:46:37.733802    8307 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 18:46:37.735992    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0531 18:46:38.052289    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 18:46:38.158373    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:46:38.234299    8307 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 18:46:38.247583    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0531 18:46:38.551776    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 18:46:38.660588    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:46:38.739621    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0531 18:46:38.741467    8307 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 18:46:39.051057    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 18:46:39.157925    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:46:39.234321    8307 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 18:46:39.238121    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0531 18:46:39.551629    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 18:46:39.658440    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:46:39.738175    8307 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 18:46:39.743631    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0531 18:46:40.050920    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 18:46:40.160331    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:46:40.237232    8307 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 18:46:40.243428    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0531 18:46:40.555471    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 18:46:40.663499    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:46:40.763284    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0531 18:46:40.764712    8307 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 18:46:41.050299    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 18:46:41.158779    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:46:41.235302    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0531 18:46:41.235543    8307 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 18:46:41.551836    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 18:46:41.676907    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:46:41.737050    8307 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 18:46:41.740068    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0531 18:46:42.053359    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 18:46:42.158060    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:46:42.234829    8307 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 18:46:42.235610    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0531 18:46:42.550639    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 18:46:42.665889    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:46:42.735436    8307 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 18:46:42.737664    8307 kapi.go:107] duration metric: took 1m2.032236678s to wait for kubernetes.io/minikube-addons=registry ...
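kapi.go:96/107 runs one polling loop per addon label selector; the kubernetes.io/minikube-addons=registry selector just drained after 1m2.03s, while gcp-auth, csi-hostpath-driver, and ingress-nginx keep polling below. Expressed as a one-shot wait (the 5m timeout is illustrative):

	# Wait on the same label selector the kapi loop was polling:
	kubectl --context addons-748280 -n kube-system wait pod \
	  -l kubernetes.io/minikube-addons=registry --for=condition=Ready --timeout=5m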
	I0531 18:46:43.051537    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 18:46:43.157421    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:46:43.232914    8307 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 18:46:43.551207    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 18:46:43.665675    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:46:43.733001    8307 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 18:46:44.051109    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 18:46:44.157930    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:46:44.232722    8307 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 18:46:44.550673    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 18:46:44.661695    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:46:44.732264    8307 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 18:46:45.051711    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 18:46:45.229756    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:46:45.238779    8307 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 18:46:45.556867    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 18:46:45.661534    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:46:45.732365    8307 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 18:46:46.050564    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 18:46:46.160027    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:46:46.232555    8307 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 18:46:46.555709    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 18:46:46.657509    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:46:46.732664    8307 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 18:46:47.050863    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 18:46:47.158041    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:46:47.232611    8307 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 18:46:47.551129    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 18:46:47.658262    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:46:47.732579    8307 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 18:46:48.051233    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 18:46:48.159520    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:46:48.233564    8307 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 18:46:48.551505    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 18:46:48.658796    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:46:48.736695    8307 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 18:46:49.051314    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 18:46:49.158701    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:46:49.232895    8307 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 18:46:49.552543    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 18:46:49.669533    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:46:49.732530    8307 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 18:46:50.055478    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 18:46:50.158677    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:46:50.232192    8307 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 18:46:50.551419    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 18:46:50.657979    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:46:50.736617    8307 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 18:46:51.052102    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 18:46:51.158316    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:46:51.234123    8307 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 18:46:51.552673    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 18:46:51.659619    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:46:51.732450    8307 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 18:46:52.055569    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 18:46:52.164012    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:46:52.232952    8307 kapi.go:107] duration metric: took 1m11.533548509s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0531 18:46:52.551275    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 18:46:52.659675    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:46:53.050135    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 18:46:53.159574    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:46:53.550256    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 18:46:53.659122    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:46:54.051221    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 18:46:54.157677    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:46:54.552445    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 18:46:54.660638    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:46:55.060952    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 18:46:55.158617    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:46:55.551250    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 18:46:55.658225    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:46:56.051879    8307 kapi.go:107] duration metric: took 1m9.513662148s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0531 18:46:56.053830    8307 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-748280 cluster.
	I0531 18:46:56.056114    8307 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0531 18:46:56.058020    8307 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
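	The `gcp-auth-skip-secret` opt-out mentioned above is applied per pod through a metadata label. A minimal sketch of a pod spec carrying that label, built with the k8s.io/api and k8s.io/apimachinery types (the pod name, label value "true", and image are illustrative assumptions, not taken from this run):
	
	    package demo
	
	    import (
	        corev1 "k8s.io/api/core/v1"
	        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	    )
	
	    // podWithoutGCPCreds returns a pod labeled so the gcp-auth webhook
	    // skips it and mounts no credential secret into the containers.
	    func podWithoutGCPCreds() *corev1.Pod {
	        return &corev1.Pod{
	            ObjectMeta: metav1.ObjectMeta{
	                Name: "no-creds-demo", // hypothetical name
	                Labels: map[string]string{
	                    "gcp-auth-skip-secret": "true", // key from the message above; value is illustrative
	                },
	            },
	            Spec: corev1.PodSpec{
	                Containers: []corev1.Container{
	                    {Name: "app", Image: "nginx"}, // illustrative container
	                },
	            },
	        }
	    }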
	I0531 18:46:56.157494    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:46:56.660134    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:46:57.157309    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:46:57.656985    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:46:58.157335    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:46:58.658016    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:46:59.162343    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:46:59.657376    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:47:00.157557    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:47:00.661472    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:47:01.158087    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:47:01.657941    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:47:02.158800    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:47:02.665512    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:47:03.158871    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:47:03.658442    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:47:04.157700    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:47:04.657756    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:47:05.163217    8307 kapi.go:107] duration metric: took 1m24.047306906s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0531 18:47:05.165252    8307 out.go:177] * Enabled addons: cloud-spanner, ingress-dns, storage-provisioner, default-storageclass, inspektor-gadget, metrics-server, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I0531 18:47:05.167410    8307 addons.go:499] enable addons completed in 1m30.445258386s: enabled=[cloud-spanner ingress-dns storage-provisioner default-storageclass inspektor-gadget metrics-server volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
	I0531 18:47:05.167465    8307 start.go:233] waiting for cluster config update ...
	I0531 18:47:05.167488    8307 start.go:242] writing updated cluster config ...
	I0531 18:47:05.167807    8307 ssh_runner.go:195] Run: rm -f paused
	I0531 18:47:05.230133    8307 start.go:573] kubectl: 1.27.2, cluster: 1.27.2 (minor skew: 0)
	I0531 18:47:05.232563    8307 out.go:177] * Done! kubectl is now configured to use "addons-748280" cluster and "default" namespace by default
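	The kapi.go:96/107 lines in the run above come from a poll-until-ready loop over a pod label selector. A minimal sketch of that pattern with client-go (clientset construction is omitted, the interval and helper names are illustrative, and this is not minikube's exact implementation):
	
	    package demo
	
	    import (
	        "context"
	        "fmt"
	        "time"
	
	        corev1 "k8s.io/api/core/v1"
	        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	        "k8s.io/client-go/kubernetes"
	    )
	
	    // waitForPods polls until every pod matching selector is Running,
	    // logging the current state on each tick, much like kapi.go:96,
	    // and the total duration at the end, much like kapi.go:107.
	    func waitForPods(ctx context.Context, cs kubernetes.Interface, ns, selector string) error {
	        start := time.Now()
	        for {
	            pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
	            if err == nil && len(pods.Items) > 0 && allRunning(pods.Items) {
	                fmt.Printf("took %s to wait for %s\n", time.Since(start), selector)
	                return nil
	            }
	            fmt.Printf("waiting for pod %q, current state: not ready\n", selector)
	            select {
	            case <-ctx.Done():
	                return ctx.Err()
	            case <-time.After(500 * time.Millisecond): // illustrative poll interval
	            }
	        }
	    }
	
	    func allRunning(pods []corev1.Pod) bool {
	        for _, p := range pods {
	            if p.Status.Phase != corev1.PodRunning {
	                return false
	            }
	        }
	        return true
	    }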
	
	* 
	* ==> CRI-O <==
	* May 31 18:49:51 addons-748280 crio[891]: time="2023-05-31 18:49:51.453464718Z" level=info msg="Starting container: 4a2cbe80304edd4ad97fc3f4b969c445e84300b47b8690a3c8e492a701ee5ee8" id=c17bcc51-a699-4a2b-9830-66c43e18ab0c name=/runtime.v1.RuntimeService/StartContainer
	May 31 18:49:51 addons-748280 crio[891]: time="2023-05-31 18:49:51.471754624Z" level=info msg="Started container" PID=6456 containerID=4a2cbe80304edd4ad97fc3f4b969c445e84300b47b8690a3c8e492a701ee5ee8 description=default/hello-world-app-65bdb79f98-vk6p5/hello-world-app id=c17bcc51-a699-4a2b-9830-66c43e18ab0c name=/runtime.v1.RuntimeService/StartContainer sandboxID=bbfaee9e4cdd6a2acb211a89a5a161c75ee1deeb4791e83b97726a5243baa9f4
	May 31 18:49:51 addons-748280 conmon[6443]: conmon 4a2cbe80304edd4ad97f <ninfo>: container 6456 exited with status 1
	May 31 18:49:52 addons-748280 crio[891]: time="2023-05-31 18:49:52.346886623Z" level=info msg="Removing container: 2731786b1cbb356692d60139dee97f50579ae2cbe2664d0a36f2fe862d9c476d" id=1c6e6294-89fa-46e0-abf5-49061fb440b4 name=/runtime.v1.RuntimeService/RemoveContainer
	May 31 18:49:52 addons-748280 crio[891]: time="2023-05-31 18:49:52.372454476Z" level=info msg="Removed container 2731786b1cbb356692d60139dee97f50579ae2cbe2664d0a36f2fe862d9c476d: default/hello-world-app-65bdb79f98-vk6p5/hello-world-app" id=1c6e6294-89fa-46e0-abf5-49061fb440b4 name=/runtime.v1.RuntimeService/RemoveContainer
	May 31 18:50:02 addons-748280 crio[891]: time="2023-05-31 18:50:02.234404841Z" level=info msg="Stopping container: c3bcde53d394e704dc2e4f16505d7e1a5f1ed0ce0180031f9792e3dc9320f842 (timeout: 30s)" id=61f47c0c-2701-457b-ab62-a4b69444ab1a name=/runtime.v1.RuntimeService/StopContainer
	May 31 18:50:02 addons-748280 crio[891]: time="2023-05-31 18:50:02.266408656Z" level=info msg="Stopping pod sandbox: b46e714e34ef669b054c94535b11e7a78ed4feb678e317a1c0150549d95d41c0" id=f6f4171c-5722-4f5b-b670-19879a7ca671 name=/runtime.v1.RuntimeService/StopPodSandbox
	May 31 18:50:02 addons-748280 conmon[3798]: conmon c3bcde53d394e704dc2e <ninfo>: container 3810 exited with status 2
	May 31 18:50:02 addons-748280 crio[891]: time="2023-05-31 18:50:02.279991604Z" level=info msg="Restoring iptables rules: *nat\n:KUBE-HP-WZVDBHUK6DFET5EK - [0:0]\n:KUBE-HP-WCQPKGOSSSVBHIFH - [0:0]\n:KUBE-HOSTPORTS - [0:0]\n:KUBE-HP-4V4CCTTOIKKK7NW7 - [0:0]\n-A KUBE-HOSTPORTS -p tcp -m comment --comment \"k8s_ingress-nginx-controller-858bcd4f57-76948_ingress-nginx_493fae28-0951-494b-9648-19fa4d464abb_0_ hostport 443\" -m tcp --dport 443 -j KUBE-HP-WCQPKGOSSSVBHIFH\n-A KUBE-HOSTPORTS -p tcp -m comment --comment \"k8s_ingress-nginx-controller-858bcd4f57-76948_ingress-nginx_493fae28-0951-494b-9648-19fa4d464abb_0_ hostport 80\" -m tcp --dport 80 -j KUBE-HP-WZVDBHUK6DFET5EK\n-A KUBE-HP-WCQPKGOSSSVBHIFH -s 10.244.0.16/32 -m comment --comment \"k8s_ingress-nginx-controller-858bcd4f57-76948_ingress-nginx_493fae28-0951-494b-9648-19fa4d464abb_0_ hostport 443\" -j KUBE-MARK-MASQ\n-A KUBE-HP-WCQPKGOSSSVBHIFH -p tcp -m comment --comment \"k8s_ingress-nginx-controller-858bcd4f57-76948_ingress-nginx_493fae28-0951-494
b-9648-19fa4d464abb_0_ hostport 443\" -m tcp -j DNAT --to-destination 10.244.0.16:443\n-A KUBE-HP-WZVDBHUK6DFET5EK -s 10.244.0.16/32 -m comment --comment \"k8s_ingress-nginx-controller-858bcd4f57-76948_ingress-nginx_493fae28-0951-494b-9648-19fa4d464abb_0_ hostport 80\" -j KUBE-MARK-MASQ\n-A KUBE-HP-WZVDBHUK6DFET5EK -p tcp -m comment --comment \"k8s_ingress-nginx-controller-858bcd4f57-76948_ingress-nginx_493fae28-0951-494b-9648-19fa4d464abb_0_ hostport 80\" -m tcp -j DNAT --to-destination 10.244.0.16:80\n-X KUBE-HP-4V4CCTTOIKKK7NW7\nCOMMIT\n"
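	Decoded for readability, the escaped rule set in the line above restores the two ingress-nginx hostport chains (ports 80 and 443) and deletes the now-empty registry-proxy chain KUBE-HP-4V4CCTTOIKKK7NW7, which matches the "Closing host port tcp:5000" message that follows. The content is unchanged from the log; only the \n escapes are expanded:
	
	    *nat
	    :KUBE-HP-WZVDBHUK6DFET5EK - [0:0]
	    :KUBE-HP-WCQPKGOSSSVBHIFH - [0:0]
	    :KUBE-HOSTPORTS - [0:0]
	    :KUBE-HP-4V4CCTTOIKKK7NW7 - [0:0]
	    -A KUBE-HOSTPORTS -p tcp -m comment --comment "k8s_ingress-nginx-controller-858bcd4f57-76948_ingress-nginx_493fae28-0951-494b-9648-19fa4d464abb_0_ hostport 443" -m tcp --dport 443 -j KUBE-HP-WCQPKGOSSSVBHIFH
	    -A KUBE-HOSTPORTS -p tcp -m comment --comment "k8s_ingress-nginx-controller-858bcd4f57-76948_ingress-nginx_493fae28-0951-494b-9648-19fa4d464abb_0_ hostport 80" -m tcp --dport 80 -j KUBE-HP-WZVDBHUK6DFET5EK
	    -A KUBE-HP-WCQPKGOSSSVBHIFH -s 10.244.0.16/32 -m comment --comment "k8s_ingress-nginx-controller-858bcd4f57-76948_ingress-nginx_493fae28-0951-494b-9648-19fa4d464abb_0_ hostport 443" -j KUBE-MARK-MASQ
	    -A KUBE-HP-WCQPKGOSSSVBHIFH -p tcp -m comment --comment "k8s_ingress-nginx-controller-858bcd4f57-76948_ingress-nginx_493fae28-0951-494b-9648-19fa4d464abb_0_ hostport 443" -m tcp -j DNAT --to-destination 10.244.0.16:443
	    -A KUBE-HP-WZVDBHUK6DFET5EK -s 10.244.0.16/32 -m comment --comment "k8s_ingress-nginx-controller-858bcd4f57-76948_ingress-nginx_493fae28-0951-494b-9648-19fa4d464abb_0_ hostport 80" -j KUBE-MARK-MASQ
	    -A KUBE-HP-WZVDBHUK6DFET5EK -p tcp -m comment --comment "k8s_ingress-nginx-controller-858bcd4f57-76948_ingress-nginx_493fae28-0951-494b-9648-19fa4d464abb_0_ hostport 80" -m tcp -j DNAT --to-destination 10.244.0.16:80
	    -X KUBE-HP-4V4CCTTOIKKK7NW7
	    COMMIT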
	May 31 18:50:02 addons-748280 crio[891]: time="2023-05-31 18:50:02.386788676Z" level=info msg="Closing host port tcp:5000"
	May 31 18:50:02 addons-748280 crio[891]: time="2023-05-31 18:50:02.391941965Z" level=info msg="Host port tcp:5000 does not have an open socket"
	May 31 18:50:02 addons-748280 crio[891]: time="2023-05-31 18:50:02.392151571Z" level=info msg="Got pod network &{Name:registry-proxy-c7bxw Namespace:kube-system ID:b46e714e34ef669b054c94535b11e7a78ed4feb678e317a1c0150549d95d41c0 UID:6510a1f0-5ba2-49e1-8749-6a1b8101c599 NetNS:/var/run/netns/a1b88bfb-78b5-438a-a1ba-5e1ecee3f232 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	May 31 18:50:02 addons-748280 crio[891]: time="2023-05-31 18:50:02.392306138Z" level=info msg="Deleting pod kube-system_registry-proxy-c7bxw from CNI network \"kindnet\" (type=ptp)"
	May 31 18:50:02 addons-748280 crio[891]: time="2023-05-31 18:50:02.426337391Z" level=info msg="Stopped container c3bcde53d394e704dc2e4f16505d7e1a5f1ed0ce0180031f9792e3dc9320f842: kube-system/registry-6hcmh/registry" id=61f47c0c-2701-457b-ab62-a4b69444ab1a name=/runtime.v1.RuntimeService/StopContainer
	May 31 18:50:02 addons-748280 crio[891]: time="2023-05-31 18:50:02.426954666Z" level=info msg="Stopping pod sandbox: 765cedaf12877f917f109122edc4c2af54f7bf6f6b1cfa996b953f4d5e7f1bea" id=c397e1e5-fb33-4a3e-8c71-f6a701e08fd0 name=/runtime.v1.RuntimeService/StopPodSandbox
	May 31 18:50:02 addons-748280 crio[891]: time="2023-05-31 18:50:02.427195156Z" level=info msg="Got pod network &{Name:registry-6hcmh Namespace:kube-system ID:765cedaf12877f917f109122edc4c2af54f7bf6f6b1cfa996b953f4d5e7f1bea UID:98c2e1ee-6d1b-4140-a410-92c62d5b0c8e NetNS:/var/run/netns/ea612fca-7627-4841-a17c-e14495ff1748 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	May 31 18:50:02 addons-748280 crio[891]: time="2023-05-31 18:50:02.427333732Z" level=info msg="Deleting pod kube-system_registry-6hcmh from CNI network \"kindnet\" (type=ptp)"
	May 31 18:50:02 addons-748280 crio[891]: time="2023-05-31 18:50:02.463630222Z" level=info msg="Stopped pod sandbox: b46e714e34ef669b054c94535b11e7a78ed4feb678e317a1c0150549d95d41c0" id=f6f4171c-5722-4f5b-b670-19879a7ca671 name=/runtime.v1.RuntimeService/StopPodSandbox
	May 31 18:50:02 addons-748280 crio[891]: time="2023-05-31 18:50:02.485110551Z" level=info msg="Stopped pod sandbox: 765cedaf12877f917f109122edc4c2af54f7bf6f6b1cfa996b953f4d5e7f1bea" id=c397e1e5-fb33-4a3e-8c71-f6a701e08fd0 name=/runtime.v1.RuntimeService/StopPodSandbox
	May 31 18:50:03 addons-748280 crio[891]: time="2023-05-31 18:50:03.373580852Z" level=info msg="Removing container: 3072a2ee81fb385d8c0298d94988ddc5cbe068244de65bcf4b11e50d8a5dd38b" id=ae550481-0621-4279-badd-ba97aff65e6f name=/runtime.v1.RuntimeService/RemoveContainer
	May 31 18:50:03 addons-748280 crio[891]: time="2023-05-31 18:50:03.409213085Z" level=info msg="Removed container 3072a2ee81fb385d8c0298d94988ddc5cbe068244de65bcf4b11e50d8a5dd38b: kube-system/registry-proxy-c7bxw/registry-proxy" id=ae550481-0621-4279-badd-ba97aff65e6f name=/runtime.v1.RuntimeService/RemoveContainer
	May 31 18:50:03 addons-748280 crio[891]: time="2023-05-31 18:50:03.410521300Z" level=info msg="Removing container: c3bcde53d394e704dc2e4f16505d7e1a5f1ed0ce0180031f9792e3dc9320f842" id=8fe99d2e-8dd7-4ed7-8e70-7d68c9229579 name=/runtime.v1.RuntimeService/RemoveContainer
	May 31 18:50:03 addons-748280 crio[891]: time="2023-05-31 18:50:03.462107335Z" level=info msg="Removed container c3bcde53d394e704dc2e4f16505d7e1a5f1ed0ce0180031f9792e3dc9320f842: kube-system/registry-6hcmh/registry" id=8fe99d2e-8dd7-4ed7-8e70-7d68c9229579 name=/runtime.v1.RuntimeService/RemoveContainer
	May 31 18:50:03 addons-748280 crio[891]: time="2023-05-31 18:50:03.648511461Z" level=info msg="Stopping pod sandbox: 997c13feeb684d4e24d0c7d1e7f222ef9073c685f027333feca5c93ea3b41fe3" id=4efa151a-1f12-4462-94b5-e50886425f6e name=/runtime.v1.RuntimeService/StopPodSandbox
	May 31 18:50:03 addons-748280 crio[891]: time="2023-05-31 18:50:03.651126275Z" level=info msg="Stopped pod sandbox: 997c13feeb684d4e24d0c7d1e7f222ef9073c685f027333feca5c93ea3b41fe3" id=4efa151a-1f12-4462-94b5-e50886425f6e name=/runtime.v1.RuntimeService/StopPodSandbox
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                                                        CREATED             STATE               NAME                                     ATTEMPT             POD ID              POD
	4a2cbe80304ed       13753a81eccfdd153bf7fc9a4c9198edbcce0110e7f46ed0d38cc654a6458ff5                                                                             12 seconds ago      Exited              hello-world-app                          1                   bbfaee9e4cdd6       hello-world-app-65bdb79f98-vk6p5
	12bcf75c0158a       1499ed4fbd0aa6ea742ab6bce25603aa33556e1ac0e2f24a4901a675247e538a                                                                             38 seconds ago      Exited              minikube-ingress-dns                     5                   997c13feeb684       kube-ingress-dns-minikube
	e002ec67e7538       docker.io/library/nginx@sha256:203cba3f56d7dba1d66b95c091db65a4f0778eb5d16e76151e73e0413e317328                                              2 minutes ago       Running             nginx                                    0                   93b4f1bcdc79b       nginx
	2bee07dab41d4       registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f                          3 minutes ago       Running             csi-snapshotter                          0                   bcb3d8b8be486       csi-hostpathplugin-9s7pv
	2548c3391ee86       registry.k8s.io/sig-storage/csi-provisioner@sha256:98ffd09c0784203d200e0f8c241501de31c8df79644caac7eed61bd6391e5d49                          3 minutes ago       Running             csi-provisioner                          0                   bcb3d8b8be486       csi-hostpathplugin-9s7pv
	db2d7303d3a79       registry.k8s.io/sig-storage/livenessprobe@sha256:8b00c6e8f52639ed9c6f866085893ab688e57879741b3089e3cfa9998502e158                            3 minutes ago       Running             liveness-probe                           0                   bcb3d8b8be486       csi-hostpathplugin-9s7pv
	6d550170d08c3       registry.k8s.io/sig-storage/hostpathplugin@sha256:7b1dfc90a367222067fc468442fdf952e20fc5961f25c1ad654300ddc34d7083                           3 minutes ago       Running             hostpath                                 0                   bcb3d8b8be486       csi-hostpathplugin-9s7pv
	e045b0df91edc       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:63b520448091bc94aa4dba00d6b3b3c25e410c4fb73aa46feae5b25f9895abaa                                 3 minutes ago       Running             gcp-auth                                 0                   ff0fce1de8b98       gcp-auth-58478865f7-tw9f8
	8ef06e620e95e       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:511b8c8ac828194a753909d26555ff08bc12f497dd8daeb83fe9d593693a26c1                3 minutes ago       Running             node-driver-registrar                    0                   bcb3d8b8be486       csi-hostpathplugin-9s7pv
	1c00973904c5e       registry.k8s.io/ingress-nginx/controller@sha256:0ec8b90ac690f5180830f28c73f6850b93a676149ee799dd66c6cde66fba062c                             3 minutes ago       Running             controller                               0                   afa87f5028abe       ingress-nginx-controller-858bcd4f57-76948
	c98b1fa476418       registry.k8s.io/sig-storage/csi-resizer@sha256:425d8f1b769398127767b06ed97ce62578a3179bcb99809ce93a1649e025ffe7                              3 minutes ago       Running             csi-resizer                              0                   c9d43ab51cc2d       csi-hostpath-resizer-0
	1cea1a842f042       registry.k8s.io/sig-storage/csi-attacher@sha256:4b5609c78455de45821910065281a368d5f760b41250f90cbde5110543bdc326                             3 minutes ago       Running             csi-attacher                             0                   09c0bc5b8fabc       csi-hostpath-attacher-0
	406a88cc1870b       registry.k8s.io/sig-storage/snapshot-controller@sha256:5d668e35c15df6e87e2530da25d557f543182cedbdb39d421b87076463ee9857                      3 minutes ago       Running             volume-snapshot-controller               0                   768e636d764b2       snapshot-controller-75bbb956b9-l8h9p
	a60430cca9cf2       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:01d181618f270f2a96c04006f33b2699ad3ccb02da48d0f89b22abce084b292f                   3 minutes ago       Exited              patch                                    0                   2e99185a4db5d       ingress-nginx-admission-patch-l59sb
	46250d9e6cdcb       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:80b9ba94aa2afe24553d69bd165a6a51552d1582d68618ec00d3b804a7d9193c   3 minutes ago       Running             csi-external-health-monitor-controller   0                   bcb3d8b8be486       csi-hostpathplugin-9s7pv
	683dddb99a5ab       registry.k8s.io/sig-storage/snapshot-controller@sha256:5d668e35c15df6e87e2530da25d557f543182cedbdb39d421b87076463ee9857                      3 minutes ago       Running             volume-snapshot-controller               0                   fbdd40d7cba51       snapshot-controller-75bbb956b9-b664t
	657cbe8891ab8       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:01d181618f270f2a96c04006f33b2699ad3ccb02da48d0f89b22abce084b292f                   3 minutes ago       Exited              create                                   0                   6913dae0dfe5b       ingress-nginx-admission-create-c4nzk
	7497298e3c635       97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108                                                                             3 minutes ago       Running             coredns                                  0                   464dd0ad42868       coredns-5d78c9869d-ctb4p
	2b333ed8ee445       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                                                             3 minutes ago       Running             storage-provisioner                      0                   3864ceca696e1       storage-provisioner
	7101bc75e91f2       b18bf71b941bae2e12db1c07e567ad14e4febbc778310a0fc64487f1ac877d79                                                                             4 minutes ago       Running             kindnet-cni                              0                   ae4f61d0fc6b4       kindnet-265l5
	6a504f33e4da1       29921a084542255eb81a1a660a603b1a24636d88b202f9010daae75fa32754c0                                                                             4 minutes ago       Running             kube-proxy                               0                   8ebd4a15353d6       kube-proxy-k8k6d
	ab2df087bab58       24bc64e911039ecf00e263be2161797c758b7d82403ca5516ab64047a477f737                                                                             4 minutes ago       Running             etcd                                     0                   c6ce6c07026b8       etcd-addons-748280
	622e30ad34794       305d7ed1dae2877c3a80d434c5fb9f1aac1aa3d2431c36130a3fcd1970e93840                                                                             4 minutes ago       Running             kube-scheduler                           0                   9ff0f008338df       kube-scheduler-addons-748280
	b2472317bc595       72c9df6be7f1b997e4a31b5cb9aa7262e5278905af97e6a69e341e3f0f9bbaae                                                                             4 minutes ago       Running             kube-apiserver                           0                   e37e812dfa950       kube-apiserver-addons-748280
	12917d226c926       2ee705380c3c59a538b853cbe9ae9886ebbd0001a4cea4add5adeea48e5f48d4                                                                             4 minutes ago       Running             kube-controller-manager                  0                   521b34c7ecfa6       kube-controller-manager-addons-748280
	
	* 
	* ==> coredns [7497298e3c6352e8d10edb8ba8b599bab7afb40455b793ac2276c6c6363b4e7a] <==
	* [INFO] 10.244.0.16:37729 - 50460 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000104401s
	[INFO] 10.244.0.16:37729 - 45711 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.002780909s
	[INFO] 10.244.0.16:39281 - 26161 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.003355305s
	[INFO] 10.244.0.16:39281 - 1392 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.00310898s
	[INFO] 10.244.0.16:37729 - 37566 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.003249845s
	[INFO] 10.244.0.16:39281 - 11750 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000126793s
	[INFO] 10.244.0.16:37729 - 20904 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000248228s
	[INFO] 10.244.0.16:56070 - 563 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000113344s
	[INFO] 10.244.0.16:56070 - 1390 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000073796s
	[INFO] 10.244.0.16:56070 - 14017 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000057903s
	[INFO] 10.244.0.16:56070 - 41081 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000043774s
	[INFO] 10.244.0.16:56070 - 30076 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000060898s
	[INFO] 10.244.0.16:56070 - 52138 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000041936s
	[INFO] 10.244.0.16:56070 - 53464 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001820991s
	[INFO] 10.244.0.16:56070 - 3888 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001032787s
	[INFO] 10.244.0.16:56070 - 7291 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.00006939s
	[INFO] 10.244.0.16:35020 - 40580 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.00016388s
	[INFO] 10.244.0.16:35020 - 46634 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000104967s
	[INFO] 10.244.0.16:35020 - 16568 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000071901s
	[INFO] 10.244.0.16:35020 - 63335 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000050125s
	[INFO] 10.244.0.16:35020 - 47367 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000037957s
	[INFO] 10.244.0.16:35020 - 6719 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.00003991s
	[INFO] 10.244.0.16:35020 - 55846 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001063811s
	[INFO] 10.244.0.16:35020 - 30773 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.000924784s
	[INFO] 10.244.0.16:35020 - 1656 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000054998s
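	Each burst above is a single lookup of hello-world-app.default.svc.cluster.local from the ingress controller pod (10.244.0.16) being expanded through that pod's resolv.conf search list (ingress-nginx.svc.cluster.local, svc.cluster.local, cluster.local, us-east-2.compute.internal), each returning NXDOMAIN, before the bare name finally answers NOERROR: with the cluster default ndots:5, a name with fewer than five dots is tried against every search domain first. A name written fully qualified, with a trailing dot, skips that expansion; a minimal Go sketch (the service name is taken from the log, the program itself is illustrative):
	
	    package main
	
	    import (
	        "context"
	        "fmt"
	        "net"
	        "time"
	    )
	
	    func main() {
	        ctx, cancel := context.WithTimeout(context.Background(), 2*time.Second)
	        defer cancel()
	
	        // The trailing dot marks the name as fully qualified, so the
	        // resolver queries it directly instead of first walking the
	        // search domains seen in the coredns log above.
	        addrs, err := net.DefaultResolver.LookupHost(ctx, "hello-world-app.default.svc.cluster.local.")
	        if err != nil {
	            fmt.Println("lookup failed:", err)
	            return
	        }
	        fmt.Println("resolved to:", addrs)
	    }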
	
	* 
	* ==> describe nodes <==
	* Name:               addons-748280
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-748280
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=7022875d4a054c2d518e5e5a7b9d500799d50140
	                    minikube.k8s.io/name=addons-748280
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_05_31T18_45_22_0700
	                    minikube.k8s.io/version=v1.30.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-748280
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-748280"}
	                    kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 31 May 2023 18:45:18 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-748280
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 31 May 2023 18:49:57 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 31 May 2023 18:49:57 +0000   Wed, 31 May 2023 18:45:14 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 31 May 2023 18:49:57 +0000   Wed, 31 May 2023 18:45:14 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 31 May 2023 18:49:57 +0000   Wed, 31 May 2023 18:45:14 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 31 May 2023 18:49:57 +0000   Wed, 31 May 2023 18:46:09 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-748280
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022624Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022624Ki
	  pods:               110
	System Info:
	  Machine ID:                 f0fd9b7bb62247f4b43374d029771549
	  System UUID:                01389d32-e699-4c5e-890c-7ff02ae10f68
	  Boot ID:                    35429d0f-2ece-432d-a992-d9f8cda99d9c
	  Kernel Version:             5.15.0-1037-aws
	  OS Image:                   Ubuntu 22.04.2 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.24.5
	  Kubelet Version:            v1.27.2
	  Kube-Proxy Version:         v1.27.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (17 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     hello-world-app-65bdb79f98-vk6p5             0 (0%)        0 (0%)      0 (0%)           0 (0%)         17s
	  default                     nginx                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m36s
	  gcp-auth                    gcp-auth-58478865f7-tw9f8                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m18s
	  ingress-nginx               ingress-nginx-controller-858bcd4f57-76948    100m (5%)     0 (0%)      90Mi (1%)        0 (0%)         4m24s
	  kube-system                 coredns-5d78c9869d-ctb4p                     100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     4m29s
	  kube-system                 csi-hostpath-attacher-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m24s
	  kube-system                 csi-hostpath-resizer-0                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m23s
	  kube-system                 csi-hostpathplugin-9s7pv                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m55s
	  kube-system                 etcd-addons-748280                           100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         4m43s
	  kube-system                 kindnet-265l5                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      4m30s
	  kube-system                 kube-apiserver-addons-748280                 250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m42s
	  kube-system                 kube-controller-manager-addons-748280        200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m44s
	  kube-system                 kube-proxy-k8k6d                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m30s
	  kube-system                 kube-scheduler-addons-748280                 100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m44s
	  kube-system                 snapshot-controller-75bbb956b9-b664t         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m24s
	  kube-system                 snapshot-controller-75bbb956b9-l8h9p         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m24s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m24s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (47%)  100m (5%)
	  memory             310Mi (3%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m23s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  4m51s (x8 over 4m51s)  kubelet          Node addons-748280 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m51s (x8 over 4m51s)  kubelet          Node addons-748280 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m51s (x8 over 4m51s)  kubelet          Node addons-748280 status is now: NodeHasSufficientPID
	  Normal  Starting                 4m43s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m43s                  kubelet          Node addons-748280 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m43s                  kubelet          Node addons-748280 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m43s                  kubelet          Node addons-748280 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           4m31s                  node-controller  Node addons-748280 event: Registered Node addons-748280 in Controller
	  Normal  NodeReady                3m55s                  kubelet          Node addons-748280 status is now: NodeReady
	
	* 
	* ==> dmesg <==
	* [May31 18:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.014643] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +1.239601] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.408914] kauditd_printk_skb: 26 callbacks suppressed
	
	* 
	* ==> etcd [ab2df087bab58700b57612f6f58877865e5caf7ca986cb74a316f1322c67b0c1] <==
	* {"level":"info","ts":"2023-05-31T18:45:14.359Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-05-31T18:45:14.360Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2023-05-31T18:45:14.360Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-05-31T18:45:14.360Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2023-05-31T18:45:14.370Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-05-31T18:45:14.390Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-05-31T18:45:14.390Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-05-31T18:45:35.501Z","caller":"traceutil/trace.go:171","msg":"trace[2077223569] transaction","detail":"{read_only:false; response_revision:375; number_of_response:1; }","duration":"289.42364ms","start":"2023-05-31T18:45:35.212Z","end":"2023-05-31T18:45:35.501Z","steps":["trace[2077223569] 'process raft request'  (duration: 279.185869ms)"],"step_count":1}
	{"level":"info","ts":"2023-05-31T18:45:35.504Z","caller":"traceutil/trace.go:171","msg":"trace[1356551414] transaction","detail":"{read_only:false; response_revision:376; number_of_response:1; }","duration":"131.286992ms","start":"2023-05-31T18:45:35.373Z","end":"2023-05-31T18:45:35.504Z","steps":["trace[1356551414] 'process raft request'  (duration: 130.988172ms)"],"step_count":1}
	{"level":"info","ts":"2023-05-31T18:45:35.505Z","caller":"traceutil/trace.go:171","msg":"trace[1586516997] transaction","detail":"{read_only:false; response_revision:377; number_of_response:1; }","duration":"117.759206ms","start":"2023-05-31T18:45:35.387Z","end":"2023-05-31T18:45:35.505Z","steps":["trace[1586516997] 'process raft request'  (duration: 117.354984ms)"],"step_count":1}
	{"level":"info","ts":"2023-05-31T18:45:35.505Z","caller":"traceutil/trace.go:171","msg":"trace[314550480] transaction","detail":"{read_only:false; response_revision:378; number_of_response:1; }","duration":"115.594424ms","start":"2023-05-31T18:45:35.389Z","end":"2023-05-31T18:45:35.505Z","steps":["trace[314550480] 'process raft request'  (duration: 115.019471ms)"],"step_count":1}
	{"level":"info","ts":"2023-05-31T18:45:35.507Z","caller":"traceutil/trace.go:171","msg":"trace[1362284431] transaction","detail":"{read_only:false; response_revision:379; number_of_response:1; }","duration":"117.257844ms","start":"2023-05-31T18:45:35.389Z","end":"2023-05-31T18:45:35.507Z","steps":["trace[1362284431] 'process raft request'  (duration: 114.86456ms)"],"step_count":1}
	{"level":"warn","ts":"2023-05-31T18:45:38.038Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"100.828422ms","expected-duration":"100ms","prefix":"","request":"header:<ID:8128021455551041720 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/replicasets/default/cloud-spanner-emulator-6964794569\" mod_revision:0 > success:<request_put:<key:\"/registry/replicasets/default/cloud-spanner-emulator-6964794569\" value_size:1985 >> failure:<>>","response":"size:16"}
	{"level":"info","ts":"2023-05-31T18:45:38.042Z","caller":"traceutil/trace.go:171","msg":"trace[443635447] transaction","detail":"{read_only:false; response_revision:406; number_of_response:1; }","duration":"158.535949ms","start":"2023-05-31T18:45:37.884Z","end":"2023-05-31T18:45:38.042Z","steps":["trace[443635447] 'process raft request'  (duration: 52.86102ms)","trace[443635447] 'compare'  (duration: 95.989235ms)"],"step_count":2}
	{"level":"info","ts":"2023-05-31T18:45:38.043Z","caller":"traceutil/trace.go:171","msg":"trace[885329116] linearizableReadLoop","detail":"{readStateIndex:420; appliedIndex:419; }","duration":"158.52886ms","start":"2023-05-31T18:45:37.884Z","end":"2023-05-31T18:45:38.043Z","steps":["trace[885329116] 'read index received'  (duration: 52.42388ms)","trace[885329116] 'applied index is now lower than readState.Index'  (duration: 106.104143ms)"],"step_count":2}
	{"level":"warn","ts":"2023-05-31T18:45:38.050Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"165.621345ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/apiextensions.k8s.io/customresourcedefinitions/\" range_end:\"/registry/apiextensions.k8s.io/customresourcedefinitions0\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2023-05-31T18:45:38.051Z","caller":"traceutil/trace.go:171","msg":"trace[333068569] range","detail":"{range_begin:/registry/apiextensions.k8s.io/customresourcedefinitions/; range_end:/registry/apiextensions.k8s.io/customresourcedefinitions0; response_count:0; response_revision:406; }","duration":"167.12141ms","start":"2023-05-31T18:45:37.884Z","end":"2023-05-31T18:45:38.051Z","steps":["trace[333068569] 'agreement among raft nodes before linearized reading'  (duration: 165.156627ms)"],"step_count":1}
	{"level":"info","ts":"2023-05-31T18:45:38.053Z","caller":"traceutil/trace.go:171","msg":"trace[2099997861] transaction","detail":"{read_only:false; response_revision:407; number_of_response:1; }","duration":"115.544522ms","start":"2023-05-31T18:45:37.937Z","end":"2023-05-31T18:45:38.053Z","steps":["trace[2099997861] 'process raft request'  (duration: 105.338562ms)"],"step_count":1}
	{"level":"info","ts":"2023-05-31T18:45:38.066Z","caller":"traceutil/trace.go:171","msg":"trace[628386089] transaction","detail":"{read_only:false; response_revision:408; number_of_response:1; }","duration":"128.206321ms","start":"2023-05-31T18:45:37.938Z","end":"2023-05-31T18:45:38.066Z","steps":["trace[628386089] 'process raft request'  (duration: 114.623791ms)"],"step_count":1}
	{"level":"warn","ts":"2023-05-31T18:45:38.216Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"106.955755ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/ranges/serviceips\" ","response":"range_response_count:1 size:116"}
	{"level":"info","ts":"2023-05-31T18:45:38.216Z","caller":"traceutil/trace.go:171","msg":"trace[542074782] range","detail":"{range_begin:/registry/ranges/serviceips; range_end:; response_count:1; response_revision:411; }","duration":"107.035163ms","start":"2023-05-31T18:45:38.109Z","end":"2023-05-31T18:45:38.216Z","steps":["trace[542074782] 'agreement among raft nodes before linearized reading'  (duration: 106.865926ms)"],"step_count":1}
	{"level":"info","ts":"2023-05-31T18:45:38.217Z","caller":"traceutil/trace.go:171","msg":"trace[1712723025] transaction","detail":"{read_only:false; response_revision:411; number_of_response:1; }","duration":"107.487213ms","start":"2023-05-31T18:45:38.109Z","end":"2023-05-31T18:45:38.217Z","steps":["trace[1712723025] 'process raft request'  (duration: 38.56973ms)","trace[1712723025] 'compare'  (duration: 68.378378ms)"],"step_count":2}
	{"level":"info","ts":"2023-05-31T18:45:38.441Z","caller":"traceutil/trace.go:171","msg":"trace[5844250] transaction","detail":"{read_only:false; response_revision:415; number_of_response:1; }","duration":"153.985113ms","start":"2023-05-31T18:45:38.287Z","end":"2023-05-31T18:45:38.441Z","steps":["trace[5844250] 'process raft request'  (duration: 153.344519ms)"],"step_count":1}
	{"level":"info","ts":"2023-05-31T18:45:38.442Z","caller":"traceutil/trace.go:171","msg":"trace[1111194169] transaction","detail":"{read_only:false; response_revision:414; number_of_response:1; }","duration":"154.62674ms","start":"2023-05-31T18:45:38.287Z","end":"2023-05-31T18:45:38.442Z","steps":["trace[1111194169] 'process raft request'  (duration: 141.402836ms)","trace[1111194169] 'compare'  (duration: 11.64516ms)"],"step_count":2}
	{"level":"info","ts":"2023-05-31T18:45:38.499Z","caller":"traceutil/trace.go:171","msg":"trace[1078383080] transaction","detail":"{read_only:false; response_revision:417; number_of_response:1; }","duration":"115.21646ms","start":"2023-05-31T18:45:38.383Z","end":"2023-05-31T18:45:38.499Z","steps":["trace[1078383080] 'process raft request'  (duration: 67.6936ms)"],"step_count":1}
	
	* 
	* ==> gcp-auth [e045b0df91edc70792695e93ec373cac43b8d968e888050834ff77a8494fa801] <==
	* 2023/05/31 18:46:55 GCP Auth Webhook started!
	2023/05/31 18:47:15 Ready to marshal response ...
	2023/05/31 18:47:15 Ready to write response ...
	2023/05/31 18:47:28 Ready to marshal response ...
	2023/05/31 18:47:28 Ready to write response ...
	2023/05/31 18:49:47 Ready to marshal response ...
	2023/05/31 18:49:47 Ready to write response ...
	
	* 
	* ==> kernel <==
	*  18:50:04 up 32 min,  0 users,  load average: 0.29, 0.91, 0.52
	Linux addons-748280 5.15.0-1037-aws #41~20.04.1-Ubuntu SMP Mon May 22 18:20:20 UTC 2023 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.2 LTS"
	
	* 
	* ==> kindnet [7101bc75e91f244083f5e8fbbbea1a1dfaac460ae9023958afc3251cc0a02ea9] <==
	* I0531 18:47:58.909021       1 main.go:227] handling current node
	I0531 18:48:08.913802       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0531 18:48:08.913832       1 main.go:227] handling current node
	I0531 18:48:18.917796       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0531 18:48:18.917827       1 main.go:227] handling current node
	I0531 18:48:28.921666       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0531 18:48:28.921695       1 main.go:227] handling current node
	I0531 18:48:38.931804       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0531 18:48:38.931833       1 main.go:227] handling current node
	I0531 18:48:48.942574       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0531 18:48:48.942602       1 main.go:227] handling current node
	I0531 18:48:58.955122       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0531 18:48:58.955145       1 main.go:227] handling current node
	I0531 18:49:08.959378       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0531 18:49:08.959408       1 main.go:227] handling current node
	I0531 18:49:18.970550       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0531 18:49:18.970576       1 main.go:227] handling current node
	I0531 18:49:28.982344       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0531 18:49:28.982376       1 main.go:227] handling current node
	I0531 18:49:38.986403       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0531 18:49:38.986432       1 main.go:227] handling current node
	I0531 18:49:48.998465       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0531 18:49:48.998492       1 main.go:227] handling current node
	I0531 18:49:59.006922       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0531 18:49:59.006954       1 main.go:227] handling current node
	
	* 
	* ==> kube-apiserver [b2472317bc595f939b64447cf1f262c4adf9aebe969c91e002ddaa9571020a29] <==
	* E0531 18:46:09.311319       1 dispatcher.go:206] failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.111.7.220:443: connect: connection refused
	W0531 18:46:09.312285       1 dispatcher.go:202] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.111.7.220:443: connect: connection refused
	E0531 18:46:09.312375       1 dispatcher.go:206] failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.111.7.220:443: connect: connection refused
	I0531 18:46:16.097664       1 handler_discovery.go:325] DiscoveryManager: Failed to download discovery for kube-system/metrics-server:443: 503 error trying to reach service: dial tcp 10.100.179.33:443: connect: connection refused
	I0531 18:46:16.097748       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E0531 18:46:16.100161       1 available_controller.go:460] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.100.179.33:443/apis/metrics.k8s.io/v1beta1: Get "https://10.100.179.33:443/apis/metrics.k8s.io/v1beta1": dial tcp 10.100.179.33:443: connect: connection refused
	I0531 18:46:16.211630       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0531 18:46:18.114777       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0531 18:47:17.173788       1 controller.go:132] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	E0531 18:47:17.190004       1 handler_proxy.go:144] error resolving kube-system/metrics-server: service "metrics-server" not found
	W0531 18:47:17.190036       1 handler_proxy.go:100] no RequestInfo found in the context
	E0531 18:47:17.190078       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0531 18:47:17.190086       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0531 18:47:21.710027       1 handler.go:232] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	I0531 18:47:21.747025       1 handler.go:232] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0531 18:47:22.802016       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I0531 18:47:28.006633       1 controller.go:624] quota admission added evaluator for: ingresses.networking.k8s.io
	I0531 18:47:28.447126       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs=map[IPv4:10.106.134.191]
	E0531 18:48:17.190516       1 handler_proxy.go:144] error resolving kube-system/metrics-server: service "metrics-server" not found
	W0531 18:48:17.190544       1 handler_proxy.go:100] no RequestInfo found in the context
	E0531 18:48:17.190589       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0531 18:48:17.190601       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0531 18:49:47.955022       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs=map[IPv4:10.107.203.236]
	
	* 
	* ==> kube-controller-manager [12917d226c926cb6e809dd8b7aa8859740561c32bbb851945bfa4aaa99a74f3d] <==
	* I0531 18:46:59.080743       1 job_controller.go:523] enqueueing job gcp-auth/gcp-auth-certs-create
	I0531 18:47:01.019636       1 job_controller.go:523] enqueueing job gcp-auth/gcp-auth-certs-patch
	I0531 18:47:01.086804       1 job_controller.go:523] enqueueing job gcp-auth/gcp-auth-certs-patch
	E0531 18:47:22.803932       1 reflector.go:148] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource
	W0531 18:47:24.195196       1 reflector.go:533] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0531 18:47:24.195249       1 reflector.go:148] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0531 18:47:26.311864       1 reflector.go:533] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0531 18:47:26.311898       1 reflector.go:148] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0531 18:47:31.532528       1 reflector.go:533] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0531 18:47:31.532582       1 reflector.go:148] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0531 18:47:31.815009       1 namespace_controller.go:182] "Namespace has been deleted" namespace="gadget"
	I0531 18:47:34.127216       1 shared_informer.go:311] Waiting for caches to sync for resource quota
	I0531 18:47:34.127253       1 shared_informer.go:318] Caches are synced for resource quota
	I0531 18:47:34.497089       1 shared_informer.go:311] Waiting for caches to sync for garbage collector
	I0531 18:47:34.497142       1 shared_informer.go:318] Caches are synced for garbage collector
	W0531 18:47:41.078475       1 reflector.go:533] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0531 18:47:41.078509       1 reflector.go:148] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0531 18:48:03.377690       1 reflector.go:533] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0531 18:48:03.377818       1 reflector.go:148] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0531 18:48:42.748225       1 reflector.go:533] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0531 18:48:42.748260       1 reflector.go:148] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0531 18:49:24.269000       1 reflector.go:533] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0531 18:49:24.269131       1 reflector.go:148] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0531 18:49:47.640190       1 event.go:307] "Event occurred" object="default/hello-world-app" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set hello-world-app-65bdb79f98 to 1"
	I0531 18:49:47.681005       1 event.go:307] "Event occurred" object="default/hello-world-app-65bdb79f98" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: hello-world-app-65bdb79f98-vk6p5"
	
	* 
	* ==> kube-proxy [6a504f33e4da1ef97d3eac91b1c1ebb5230daefb710a550c1682f472101c1723] <==
	* I0531 18:45:40.167202       1 node.go:141] Successfully retrieved node IP: 192.168.49.2
	I0531 18:45:40.191872       1 server_others.go:110] "Detected node IP" address="192.168.49.2"
	I0531 18:45:40.192624       1 server_others.go:551] "Using iptables proxy"
	I0531 18:45:40.366606       1 server_others.go:190] "Using iptables Proxier"
	I0531 18:45:40.366723       1 server_others.go:197] "kube-proxy running in dual-stack mode" ipFamily=IPv4
	I0531 18:45:40.366792       1 server_others.go:198] "Creating dualStackProxier for iptables"
	I0531 18:45:40.366831       1 server_others.go:481] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, defaulting to no-op detect-local for IPv6"
	I0531 18:45:40.366917       1 proxier.go:253] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0531 18:45:40.367585       1 server.go:657] "Version info" version="v1.27.2"
	I0531 18:45:40.367842       1 server.go:659] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0531 18:45:40.368684       1 config.go:188] "Starting service config controller"
	I0531 18:45:40.368792       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0531 18:45:40.368854       1 config.go:97] "Starting endpoint slice config controller"
	I0531 18:45:40.368900       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0531 18:45:40.369510       1 config.go:315] "Starting node config controller"
	I0531 18:45:40.370382       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0531 18:45:40.471004       1 shared_informer.go:318] Caches are synced for node config
	I0531 18:45:40.472771       1 shared_informer.go:318] Caches are synced for service config
	I0531 18:45:40.472800       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	* 
	* ==> kube-scheduler [622e30ad34794be09d941ef4c5989fb69f9afc992fb535d2fa37f71359c8e0ed] <==
	* W0531 18:45:18.412195       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0531 18:45:18.415387       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0531 18:45:18.412248       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0531 18:45:18.415454       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0531 18:45:18.412323       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0531 18:45:18.415518       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0531 18:45:18.412394       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0531 18:45:18.415589       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0531 18:45:18.412428       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0531 18:45:18.415665       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0531 18:45:18.412485       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0531 18:45:18.415730       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0531 18:45:19.280880       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0531 18:45:19.280917       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0531 18:45:19.298490       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0531 18:45:19.298609       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0531 18:45:19.299701       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0531 18:45:19.299788       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0531 18:45:19.325013       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0531 18:45:19.325110       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0531 18:45:19.337529       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0531 18:45:19.337631       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0531 18:45:19.371572       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0531 18:45:19.371702       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0531 18:45:19.880697       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* May 31 18:49:52 addons-748280 kubelet[1356]: I0531 18:49:52.345034    1356 scope.go:115] "RemoveContainer" containerID="2731786b1cbb356692d60139dee97f50579ae2cbe2664d0a36f2fe862d9c476d"
	May 31 18:49:52 addons-748280 kubelet[1356]: I0531 18:49:52.345315    1356 scope.go:115] "RemoveContainer" containerID="4a2cbe80304edd4ad97fc3f4b969c445e84300b47b8690a3c8e492a701ee5ee8"
	May 31 18:49:52 addons-748280 kubelet[1356]: E0531 18:49:52.345590    1356 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"hello-world-app\" with CrashLoopBackOff: \"back-off 10s restarting failed container=hello-world-app pod=hello-world-app-65bdb79f98-vk6p5_default(bd9738eb-d820-46de-87a2-693ad177d3e3)\"" pod="default/hello-world-app-65bdb79f98-vk6p5" podUID=bd9738eb-d820-46de-87a2-693ad177d3e3
	May 31 18:49:53 addons-748280 kubelet[1356]: I0531 18:49:53.348383    1356 scope.go:115] "RemoveContainer" containerID="4a2cbe80304edd4ad97fc3f4b969c445e84300b47b8690a3c8e492a701ee5ee8"
	May 31 18:49:53 addons-748280 kubelet[1356]: E0531 18:49:53.348658    1356 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"hello-world-app\" with CrashLoopBackOff: \"back-off 10s restarting failed container=hello-world-app pod=hello-world-app-65bdb79f98-vk6p5_default(bd9738eb-d820-46de-87a2-693ad177d3e3)\"" pod="default/hello-world-app-65bdb79f98-vk6p5" podUID=bd9738eb-d820-46de-87a2-693ad177d3e3
	May 31 18:49:58 addons-748280 kubelet[1356]: W0531 18:49:58.715357    1356 container.go:586] Failed to update stats for container "/crio/crio-3a4d54a331ed56b17482ced945b89d2e718c26b8344d43b3eb5e546fdce44674": unable to determine device info for dir: /var/lib/containers/storage/overlay/865968ae8bddea6b85a61a865169ef34cf51944bad6d7495727b410385854663/diff: stat failed on /var/lib/containers/storage/overlay/865968ae8bddea6b85a61a865169ef34cf51944bad6d7495727b410385854663/diff with error: no such file or directory, continuing to push stats
	May 31 18:49:58 addons-748280 kubelet[1356]: W0531 18:49:58.896857    1356 container.go:586] Failed to update stats for container "/crio/crio-9ad9aa9b3ef60dd9ed1124eac48de82ca24f60559c1102019f9694de97c9e630": unable to determine device info for dir: /var/lib/containers/storage/overlay/307ab740cace8f81ff36580590af7a775d1f24d9e0c8905e48cd1733aadfb885/diff: stat failed on /var/lib/containers/storage/overlay/307ab740cace8f81ff36580590af7a775d1f24d9e0c8905e48cd1733aadfb885/diff with error: no such file or directory, continuing to push stats
	May 31 18:49:59 addons-748280 kubelet[1356]: W0531 18:49:59.167643    1356 container.go:586] Failed to update stats for container "/docker/7a58041a0a26da39225fc1484cf566703381ab580d380a421227d99124b51cb9/crio/crio-0b224cc808dfab8a94a86be70d4f96dec7b45ddaaf2b67f2a78a8fc3cac4b3f3": unable to determine device info for dir: /var/lib/containers/storage/overlay/df920cafb1829c929c1ec6908e3be0db1223038e031bec54284193dbea9dc0da/diff: stat failed on /var/lib/containers/storage/overlay/df920cafb1829c929c1ec6908e3be0db1223038e031bec54284193dbea9dc0da/diff with error: no such file or directory, continuing to push stats
	May 31 18:50:02 addons-748280 kubelet[1356]: I0531 18:50:02.548420    1356 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7s2rm\" (UniqueName: \"kubernetes.io/projected/6510a1f0-5ba2-49e1-8749-6a1b8101c599-kube-api-access-7s2rm\") pod \"6510a1f0-5ba2-49e1-8749-6a1b8101c599\" (UID: \"6510a1f0-5ba2-49e1-8749-6a1b8101c599\") "
	May 31 18:50:02 addons-748280 kubelet[1356]: I0531 18:50:02.548476    1356 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n8p5k\" (UniqueName: \"kubernetes.io/projected/98c2e1ee-6d1b-4140-a410-92c62d5b0c8e-kube-api-access-n8p5k\") pod \"98c2e1ee-6d1b-4140-a410-92c62d5b0c8e\" (UID: \"98c2e1ee-6d1b-4140-a410-92c62d5b0c8e\") "
	May 31 18:50:02 addons-748280 kubelet[1356]: I0531 18:50:02.554969    1356 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/98c2e1ee-6d1b-4140-a410-92c62d5b0c8e-kube-api-access-n8p5k" (OuterVolumeSpecName: "kube-api-access-n8p5k") pod "98c2e1ee-6d1b-4140-a410-92c62d5b0c8e" (UID: "98c2e1ee-6d1b-4140-a410-92c62d5b0c8e"). InnerVolumeSpecName "kube-api-access-n8p5k". PluginName "kubernetes.io/projected", VolumeGidValue ""
	May 31 18:50:02 addons-748280 kubelet[1356]: I0531 18:50:02.556868    1356 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6510a1f0-5ba2-49e1-8749-6a1b8101c599-kube-api-access-7s2rm" (OuterVolumeSpecName: "kube-api-access-7s2rm") pod "6510a1f0-5ba2-49e1-8749-6a1b8101c599" (UID: "6510a1f0-5ba2-49e1-8749-6a1b8101c599"). InnerVolumeSpecName "kube-api-access-7s2rm". PluginName "kubernetes.io/projected", VolumeGidValue ""
	May 31 18:50:02 addons-748280 kubelet[1356]: I0531 18:50:02.649247    1356 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-7s2rm\" (UniqueName: \"kubernetes.io/projected/6510a1f0-5ba2-49e1-8749-6a1b8101c599-kube-api-access-7s2rm\") on node \"addons-748280\" DevicePath \"\""
	May 31 18:50:02 addons-748280 kubelet[1356]: I0531 18:50:02.649286    1356 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-n8p5k\" (UniqueName: \"kubernetes.io/projected/98c2e1ee-6d1b-4140-a410-92c62d5b0c8e-kube-api-access-n8p5k\") on node \"addons-748280\" DevicePath \"\""
	May 31 18:50:03 addons-748280 kubelet[1356]: I0531 18:50:03.371723    1356 scope.go:115] "RemoveContainer" containerID="3072a2ee81fb385d8c0298d94988ddc5cbe068244de65bcf4b11e50d8a5dd38b"
	May 31 18:50:03 addons-748280 kubelet[1356]: I0531 18:50:03.409461    1356 scope.go:115] "RemoveContainer" containerID="c3bcde53d394e704dc2e4f16505d7e1a5f1ed0ce0180031f9792e3dc9320f842"
	May 31 18:50:03 addons-748280 kubelet[1356]: I0531 18:50:03.462520    1356 scope.go:115] "RemoveContainer" containerID="c3bcde53d394e704dc2e4f16505d7e1a5f1ed0ce0180031f9792e3dc9320f842"
	May 31 18:50:03 addons-748280 kubelet[1356]: E0531 18:50:03.463335    1356 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c3bcde53d394e704dc2e4f16505d7e1a5f1ed0ce0180031f9792e3dc9320f842\": container with ID starting with c3bcde53d394e704dc2e4f16505d7e1a5f1ed0ce0180031f9792e3dc9320f842 not found: ID does not exist" containerID="c3bcde53d394e704dc2e4f16505d7e1a5f1ed0ce0180031f9792e3dc9320f842"
	May 31 18:50:03 addons-748280 kubelet[1356]: I0531 18:50:03.463382    1356 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:cri-o ID:c3bcde53d394e704dc2e4f16505d7e1a5f1ed0ce0180031f9792e3dc9320f842} err="failed to get container status \"c3bcde53d394e704dc2e4f16505d7e1a5f1ed0ce0180031f9792e3dc9320f842\": rpc error: code = NotFound desc = could not find container \"c3bcde53d394e704dc2e4f16505d7e1a5f1ed0ce0180031f9792e3dc9320f842\": container with ID starting with c3bcde53d394e704dc2e4f16505d7e1a5f1ed0ce0180031f9792e3dc9320f842 not found: ID does not exist"
	May 31 18:50:03 addons-748280 kubelet[1356]: I0531 18:50:03.658082    1356 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q8nds\" (UniqueName: \"kubernetes.io/projected/60b6737d-85a4-4be3-a343-c649a32d5573-kube-api-access-q8nds\") pod \"60b6737d-85a4-4be3-a343-c649a32d5573\" (UID: \"60b6737d-85a4-4be3-a343-c649a32d5573\") "
	May 31 18:50:03 addons-748280 kubelet[1356]: I0531 18:50:03.669715    1356 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/60b6737d-85a4-4be3-a343-c649a32d5573-kube-api-access-q8nds" (OuterVolumeSpecName: "kube-api-access-q8nds") pod "60b6737d-85a4-4be3-a343-c649a32d5573" (UID: "60b6737d-85a4-4be3-a343-c649a32d5573"). InnerVolumeSpecName "kube-api-access-q8nds". PluginName "kubernetes.io/projected", VolumeGidValue ""
	May 31 18:50:03 addons-748280 kubelet[1356]: I0531 18:50:03.680481    1356 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID=6510a1f0-5ba2-49e1-8749-6a1b8101c599 path="/var/lib/kubelet/pods/6510a1f0-5ba2-49e1-8749-6a1b8101c599/volumes"
	May 31 18:50:03 addons-748280 kubelet[1356]: I0531 18:50:03.680953    1356 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID=98c2e1ee-6d1b-4140-a410-92c62d5b0c8e path="/var/lib/kubelet/pods/98c2e1ee-6d1b-4140-a410-92c62d5b0c8e/volumes"
	May 31 18:50:03 addons-748280 kubelet[1356]: I0531 18:50:03.758489    1356 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-q8nds\" (UniqueName: \"kubernetes.io/projected/60b6737d-85a4-4be3-a343-c649a32d5573-kube-api-access-q8nds\") on node \"addons-748280\" DevicePath \"\""
	May 31 18:50:04 addons-748280 kubelet[1356]: I0531 18:50:04.379545    1356 scope.go:115] "RemoveContainer" containerID="12bcf75c0158a767526eab88b9d785a8d7f2ab9be0e4ce51d9e170c475a61438"
	
	* 
	* ==> storage-provisioner [2b333ed8ee445024a127328209074caea001d936536b386c50866ea28a97614e] <==
	* I0531 18:46:10.183687       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0531 18:46:10.197858       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0531 18:46:10.197937       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0531 18:46:10.207243       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0531 18:46:10.207509       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-748280_0c5281de-a47e-49ae-ac9f-f8755360573b!
	I0531 18:46:10.208493       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"b56dbde0-2bf3-41e0-98ff-a56fe3b3072e", APIVersion:"v1", ResourceVersion:"847", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-748280_0c5281de-a47e-49ae-ac9f-f8755360573b became leader
	I0531 18:46:10.307914       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-748280_0c5281de-a47e-49ae-ac9f-f8755360573b!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-748280 -n addons-748280
helpers_test.go:261: (dbg) Run:  kubectl --context addons-748280 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestAddons/parallel/Registry FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/Registry (180.22s)
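Note: the external-access probe that fails above can be reproduced outside the suite. Below is a minimal stdlib sketch (not the test's own retry helper) that issues the same GET against the node IP and registry port taken from this run's log, with the same five-attempt doubling backoff; every constant is specific to this run, not a general value.

package main

import (
	"fmt"
	"net/http"
	"time"
)

func main() {
	// Node IP and port copied from the failing probe above.
	url := "http://192.168.49.2:5000"
	client := &http.Client{Timeout: 5 * time.Second}

	backoff := time.Second
	for attempt := 1; attempt <= 5; attempt++ {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			fmt.Printf("attempt %d: HTTP %d\n", attempt, resp.StatusCode)
			return
		}
		fmt.Printf("attempt %d failed: %v\n", attempt, err)
		if attempt < 5 {
			fmt.Printf("retrying in %s\n", backoff)
			time.Sleep(backoff)
			backoff *= 2 // 1s, 2s, 4s, 8s -- matching the retry intervals in the log
		}
	}
	fmt.Println("giving up after 5 attempts")
}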

TestAddons/parallel/Ingress (168.15s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:183: (dbg) Run:  kubectl --context addons-748280 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:208: (dbg) Run:  kubectl --context addons-748280 replace --force -f testdata/nginx-ingress-v1.yaml
2023/05/31 18:47:27 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2023/05/31 18:47:27 [DEBUG] GET http://192.168.49.2:5000: retrying in 8s (1 left)
addons_test.go:221: (dbg) Run:  kubectl --context addons-748280 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:226: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [bd9ebae2-f0e2-4d6f-82e6-133e48b8180b] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [bd9ebae2-f0e2-4d6f-82e6-133e48b8180b] Running
2023/05/31 18:47:35 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2023/05/31 18:47:35 [DEBUG] GET http://192.168.49.2:5000
2023/05/31 18:47:35 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2023/05/31 18:47:35 [DEBUG] GET http://192.168.49.2:5000: retrying in 1s (4 left)
2023/05/31 18:47:36 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2023/05/31 18:47:36 [DEBUG] GET http://192.168.49.2:5000: retrying in 2s (3 left)
addons_test.go:226: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 9.010015423s
addons_test.go:238: (dbg) Run:  out/minikube-linux-arm64 -p addons-748280 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
2023/05/31 18:47:38 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2023/05/31 18:47:38 [DEBUG] GET http://192.168.49.2:5000: retrying in 4s (2 left)
2023/05/31 18:47:42 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2023/05/31 18:47:42 [DEBUG] GET http://192.168.49.2:5000: retrying in 8s (1 left)
2023/05/31 18:47:50 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2023/05/31 18:47:51 [DEBUG] GET http://192.168.49.2:5000
2023/05/31 18:47:51 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2023/05/31 18:47:51 [DEBUG] GET http://192.168.49.2:5000: retrying in 1s (4 left)
2023/05/31 18:47:52 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2023/05/31 18:47:52 [DEBUG] GET http://192.168.49.2:5000: retrying in 2s (3 left)
2023/05/31 18:47:54 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2023/05/31 18:47:54 [DEBUG] GET http://192.168.49.2:5000: retrying in 4s (2 left)
2023/05/31 18:47:58 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2023/05/31 18:47:58 [DEBUG] GET http://192.168.49.2:5000: retrying in 8s (1 left)
2023/05/31 18:48:06 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2023/05/31 18:48:08 [DEBUG] GET http://192.168.49.2:5000
2023/05/31 18:48:08 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2023/05/31 18:48:08 [DEBUG] GET http://192.168.49.2:5000: retrying in 1s (4 left)
2023/05/31 18:48:09 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2023/05/31 18:48:09 [DEBUG] GET http://192.168.49.2:5000: retrying in 2s (3 left)
2023/05/31 18:48:11 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2023/05/31 18:48:11 [DEBUG] GET http://192.168.49.2:5000: retrying in 4s (2 left)
2023/05/31 18:48:15 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2023/05/31 18:48:15 [DEBUG] GET http://192.168.49.2:5000: retrying in 8s (1 left)
2023/05/31 18:48:23 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2023/05/31 18:48:24 [DEBUG] GET http://192.168.49.2:5000
2023/05/31 18:48:24 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2023/05/31 18:48:24 [DEBUG] GET http://192.168.49.2:5000: retrying in 1s (4 left)
2023/05/31 18:48:25 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2023/05/31 18:48:25 [DEBUG] GET http://192.168.49.2:5000: retrying in 2s (3 left)
2023/05/31 18:48:27 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2023/05/31 18:48:27 [DEBUG] GET http://192.168.49.2:5000: retrying in 4s (2 left)
2023/05/31 18:48:31 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2023/05/31 18:48:31 [DEBUG] GET http://192.168.49.2:5000: retrying in 8s (1 left)
2023/05/31 18:48:39 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2023/05/31 18:48:42 [DEBUG] GET http://192.168.49.2:5000
2023/05/31 18:48:42 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2023/05/31 18:48:42 [DEBUG] GET http://192.168.49.2:5000: retrying in 1s (4 left)
2023/05/31 18:48:43 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2023/05/31 18:48:43 [DEBUG] GET http://192.168.49.2:5000: retrying in 2s (3 left)
2023/05/31 18:48:45 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2023/05/31 18:48:45 [DEBUG] GET http://192.168.49.2:5000: retrying in 4s (2 left)
2023/05/31 18:48:49 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2023/05/31 18:48:49 [DEBUG] GET http://192.168.49.2:5000: retrying in 8s (1 left)
2023/05/31 18:48:57 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2023/05/31 18:49:00 [DEBUG] GET http://192.168.49.2:5000
2023/05/31 18:49:00 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2023/05/31 18:49:00 [DEBUG] GET http://192.168.49.2:5000: retrying in 1s (4 left)
2023/05/31 18:49:01 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2023/05/31 18:49:01 [DEBUG] GET http://192.168.49.2:5000: retrying in 2s (3 left)
2023/05/31 18:49:03 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2023/05/31 18:49:03 [DEBUG] GET http://192.168.49.2:5000: retrying in 4s (2 left)
2023/05/31 18:49:07 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2023/05/31 18:49:07 [DEBUG] GET http://192.168.49.2:5000: retrying in 8s (1 left)
2023/05/31 18:49:15 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2023/05/31 18:49:19 [DEBUG] GET http://192.168.49.2:5000
2023/05/31 18:49:19 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2023/05/31 18:49:19 [DEBUG] GET http://192.168.49.2:5000: retrying in 1s (4 left)
2023/05/31 18:49:20 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2023/05/31 18:49:20 [DEBUG] GET http://192.168.49.2:5000: retrying in 2s (3 left)
2023/05/31 18:49:22 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2023/05/31 18:49:22 [DEBUG] GET http://192.168.49.2:5000: retrying in 4s (2 left)
2023/05/31 18:49:26 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2023/05/31 18:49:26 [DEBUG] GET http://192.168.49.2:5000: retrying in 8s (1 left)
2023/05/31 18:49:34 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2023/05/31 18:49:46 [DEBUG] GET http://192.168.49.2:5000
2023/05/31 18:49:46 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2023/05/31 18:49:46 [DEBUG] GET http://192.168.49.2:5000: retrying in 1s (4 left)
addons_test.go:238: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-748280 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m9.916749342s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:254: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
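For context, "Process exited with status 28" above is the remote command's exit code, and for curl 28 means the operation timed out (CURLE_OPERATION_TIMEDOUT): the request was issued but got no response in time. The following is a hedged sketch of re-running the same in-node probe by hand via os/exec; the 30s outer timeout and curl's -m 10 are illustrative choices, not the suite's values.

package main

import (
	"context"
	"fmt"
	"os/exec"
	"time"
)

func main() {
	// Outer timeout is an illustrative choice; the test itself waited much longer.
	ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
	defer cancel()

	// Same probe as the failing step above, run through minikube ssh.
	cmd := exec.CommandContext(ctx, "out/minikube-linux-arm64", "-p", "addons-748280",
		"ssh", "curl -s -m 10 http://127.0.0.1/ -H 'Host: nginx.example.com'")
	out, err := cmd.CombinedOutput()
	fmt.Printf("%s", out)
	if err != nil {
		fmt.Printf("probe failed: %v\n", err) // curl exit 28 == operation timed out
	}
}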
addons_test.go:262: (dbg) Run:  kubectl --context addons-748280 replace --force -f testdata/ingress-dns-example-v1.yaml
2023/05/31 18:49:47 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2023/05/31 18:49:47 [DEBUG] GET http://192.168.49.2:5000: retrying in 2s (3 left)
addons_test.go:267: (dbg) Run:  out/minikube-linux-arm64 -p addons-748280 ip
addons_test.go:273: (dbg) Run:  nslookup hello-john.test 192.168.49.2
2023/05/31 18:49:49 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2023/05/31 18:49:49 [DEBUG] GET http://192.168.49.2:5000: retrying in 4s (2 left)
2023/05/31 18:49:53 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2023/05/31 18:49:53 [DEBUG] GET http://192.168.49.2:5000: retrying in 8s (1 left)
2023/05/31 18:50:01 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
addons_test.go:273: (dbg) Non-zero exit: nslookup hello-john.test 192.168.49.2: exit status 1 (15.051690381s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
addons_test.go:275: failed to nslookup hello-john.test host. args "nslookup hello-john.test 192.168.49.2" : exit status 1
addons_test.go:279: unexpected output from nslookup. stdout: ;; connection timed out; no servers could be reached

stderr: 
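The nslookup timeout above can also be checked without nslookup. Here is a sketch using a custom net.Resolver that sends every query to the ingress-dns endpoint on the node IP; the host name and server address come from the log, while the timeouts are illustrative.

package main

import (
	"context"
	"fmt"
	"net"
	"time"
)

func main() {
	r := &net.Resolver{
		PreferGo: true,
		Dial: func(ctx context.Context, network, _ string) (net.Conn, error) {
			d := net.Dialer{Timeout: 5 * time.Second}
			// Force all queries to the ingress-dns server on the node IP.
			return d.DialContext(ctx, network, "192.168.49.2:53")
		},
	}
	ctx, cancel := context.WithTimeout(context.Background(), 15*time.Second)
	defer cancel()

	addrs, err := r.LookupHost(ctx, "hello-john.test")
	if err != nil {
		fmt.Println("lookup failed:", err) // matches the observed timeout
		return
	}
	fmt.Println("resolved to:", addrs)
}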
addons_test.go:282: (dbg) Run:  out/minikube-linux-arm64 -p addons-748280 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:282: (dbg) Done: out/minikube-linux-arm64 -p addons-748280 addons disable ingress-dns --alsologtostderr -v=1: (1.22777511s)
addons_test.go:287: (dbg) Run:  out/minikube-linux-arm64 -p addons-748280 addons disable ingress --alsologtostderr -v=1
addons_test.go:287: (dbg) Done: out/minikube-linux-arm64 -p addons-748280 addons disable ingress --alsologtostderr -v=1: (7.747693483s)
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestAddons/parallel/Ingress]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect addons-748280
helpers_test.go:235: (dbg) docker inspect addons-748280:

-- stdout --
	[
	    {
	        "Id": "7a58041a0a26da39225fc1484cf566703381ab580d380a421227d99124b51cb9",
	        "Created": "2023-05-31T18:44:57.223926968Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 8785,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-05-31T18:44:57.591241228Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:dee4774de4f99268e16c76379a36a4607bed47635a069a7e60c17cd24d9aaa76",
	        "ResolvConfPath": "/var/lib/docker/containers/7a58041a0a26da39225fc1484cf566703381ab580d380a421227d99124b51cb9/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/7a58041a0a26da39225fc1484cf566703381ab580d380a421227d99124b51cb9/hostname",
	        "HostsPath": "/var/lib/docker/containers/7a58041a0a26da39225fc1484cf566703381ab580d380a421227d99124b51cb9/hosts",
	        "LogPath": "/var/lib/docker/containers/7a58041a0a26da39225fc1484cf566703381ab580d380a421227d99124b51cb9/7a58041a0a26da39225fc1484cf566703381ab580d380a421227d99124b51cb9-json.log",
	        "Name": "/addons-748280",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-748280:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-748280",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/c904836411e908252a69d24d682ce2db18fd81888b93f4c037d25dd6cf4c1d34-init/diff:/var/lib/docker/overlay2/548bced7e749d102323bab71db162b075785f916e2a896d29f3adc2c3d7fbea8/diff",
	                "MergedDir": "/var/lib/docker/overlay2/c904836411e908252a69d24d682ce2db18fd81888b93f4c037d25dd6cf4c1d34/merged",
	                "UpperDir": "/var/lib/docker/overlay2/c904836411e908252a69d24d682ce2db18fd81888b93f4c037d25dd6cf4c1d34/diff",
	                "WorkDir": "/var/lib/docker/overlay2/c904836411e908252a69d24d682ce2db18fd81888b93f4c037d25dd6cf4c1d34/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-748280",
	                "Source": "/var/lib/docker/volumes/addons-748280/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-748280",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685034446-16582@sha256:aa728b22374c829d1e5b0a5d64d51d3e0ae0f2b191381d957516fdff68f357c8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-748280",
	                "name.minikube.sigs.k8s.io": "addons-748280",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "a3812706bf440a07b0ac0dc7b60a1480023e8c8842b2635e8ad06f1df8b39603",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32772"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32771"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32768"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32770"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32769"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/a3812706bf44",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-748280": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "7a58041a0a26",
	                        "addons-748280"
	                    ],
	                    "NetworkID": "760d9ac68c2919cc41692d416f20a39b5774ce399f6df40f2bb0801afd196ee3",
	                    "EndpointID": "d7565ee75e21f65b3313c923380dd3d07fcb9d6db2fb265fe468a0fa8884e40d",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
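One detail worth pulling out of the inspect dump: the container publishes 5000/tcp on the host only at 127.0.0.1:32770, while the failing registry probe dialed 192.168.49.2:5000 on the container network directly. A sketch for extracting that binding (the format template is standard docker inspect Go-template syntax; the container name comes from the dump above):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Pull the host binding for the registry port out of docker inspect.
	tmpl := `{{(index (index .NetworkSettings.Ports "5000/tcp") 0).HostIp}}:{{(index (index .NetworkSettings.Ports "5000/tcp") 0).HostPort}}`
	out, err := exec.Command("docker", "inspect", "-f", tmpl, "addons-748280").Output()
	if err != nil {
		fmt.Println("inspect failed:", err)
		return
	}
	fmt.Printf("5000/tcp published at %s", out) // expected from the dump: 127.0.0.1:32770
}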
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-748280 -n addons-748280
helpers_test.go:244: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p addons-748280 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p addons-748280 logs -n 25: (1.713684464s)
helpers_test.go:252: TestAddons/parallel/Ingress logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-924367   | jenkins | v1.30.1 | 31 May 23 18:44 UTC |                     |
	|         | -p download-only-924367        |                        |         |         |                     |                     |
	|         | --force --alsologtostderr      |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0   |                        |         |         |                     |                     |
	|         | --container-runtime=crio       |                        |         |         |                     |                     |
	|         | --driver=docker                |                        |         |         |                     |                     |
	|         | --container-runtime=crio       |                        |         |         |                     |                     |
	| start   | -o=json --download-only        | download-only-924367   | jenkins | v1.30.1 | 31 May 23 18:44 UTC |                     |
	|         | -p download-only-924367        |                        |         |         |                     |                     |
	|         | --force --alsologtostderr      |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.27.2   |                        |         |         |                     |                     |
	|         | --container-runtime=crio       |                        |         |         |                     |                     |
	|         | --driver=docker                |                        |         |         |                     |                     |
	|         | --container-runtime=crio       |                        |         |         |                     |                     |
	| delete  | --all                          | minikube               | jenkins | v1.30.1 | 31 May 23 18:44 UTC | 31 May 23 18:44 UTC |
	| delete  | -p download-only-924367        | download-only-924367   | jenkins | v1.30.1 | 31 May 23 18:44 UTC | 31 May 23 18:44 UTC |
	| delete  | -p download-only-924367        | download-only-924367   | jenkins | v1.30.1 | 31 May 23 18:44 UTC | 31 May 23 18:44 UTC |
	| start   | --download-only -p             | download-docker-298073 | jenkins | v1.30.1 | 31 May 23 18:44 UTC |                     |
	|         | download-docker-298073         |                        |         |         |                     |                     |
	|         | --alsologtostderr              |                        |         |         |                     |                     |
	|         | --driver=docker                |                        |         |         |                     |                     |
	|         | --container-runtime=crio       |                        |         |         |                     |                     |
	| delete  | -p download-docker-298073      | download-docker-298073 | jenkins | v1.30.1 | 31 May 23 18:44 UTC | 31 May 23 18:44 UTC |
	| start   | --download-only -p             | binary-mirror-781489   | jenkins | v1.30.1 | 31 May 23 18:44 UTC |                     |
	|         | binary-mirror-781489           |                        |         |         |                     |                     |
	|         | --alsologtostderr              |                        |         |         |                     |                     |
	|         | --binary-mirror                |                        |         |         |                     |                     |
	|         | http://127.0.0.1:45143         |                        |         |         |                     |                     |
	|         | --driver=docker                |                        |         |         |                     |                     |
	|         | --container-runtime=crio       |                        |         |         |                     |                     |
	| delete  | -p binary-mirror-781489        | binary-mirror-781489   | jenkins | v1.30.1 | 31 May 23 18:44 UTC | 31 May 23 18:44 UTC |
	| start   | -p addons-748280               | addons-748280          | jenkins | v1.30.1 | 31 May 23 18:44 UTC | 31 May 23 18:47 UTC |
	|         | --wait=true --memory=4000      |                        |         |         |                     |                     |
	|         | --alsologtostderr              |                        |         |         |                     |                     |
	|         | --addons=registry              |                        |         |         |                     |                     |
	|         | --addons=metrics-server        |                        |         |         |                     |                     |
	|         | --addons=volumesnapshots       |                        |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver   |                        |         |         |                     |                     |
	|         | --addons=gcp-auth              |                        |         |         |                     |                     |
	|         | --addons=cloud-spanner         |                        |         |         |                     |                     |
	|         | --addons=inspektor-gadget      |                        |         |         |                     |                     |
	|         | --driver=docker                |                        |         |         |                     |                     |
	|         | --container-runtime=crio       |                        |         |         |                     |                     |
	|         | --addons=ingress               |                        |         |         |                     |                     |
	|         | --addons=ingress-dns           |                        |         |         |                     |                     |
	| addons  | disable cloud-spanner -p       | addons-748280          | jenkins | v1.30.1 | 31 May 23 18:47 UTC | 31 May 23 18:47 UTC |
	|         | addons-748280                  |                        |         |         |                     |                     |
	| addons  | addons-748280 addons           | addons-748280          | jenkins | v1.30.1 | 31 May 23 18:47 UTC | 31 May 23 18:47 UTC |
	|         | disable metrics-server         |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1         |                        |         |         |                     |                     |
	| ip      | addons-748280 ip               | addons-748280          | jenkins | v1.30.1 | 31 May 23 18:47 UTC | 31 May 23 18:47 UTC |
	| addons  | disable inspektor-gadget -p    | addons-748280          | jenkins | v1.30.1 | 31 May 23 18:47 UTC | 31 May 23 18:47 UTC |
	|         | addons-748280                  |                        |         |         |                     |                     |
	| ssh     | addons-748280 ssh curl -s      | addons-748280          | jenkins | v1.30.1 | 31 May 23 18:47 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:    |                        |         |         |                     |                     |
	|         | nginx.example.com'             |                        |         |         |                     |                     |
	| ip      | addons-748280 ip               | addons-748280          | jenkins | v1.30.1 | 31 May 23 18:49 UTC | 31 May 23 18:49 UTC |
	| addons  | addons-748280 addons disable   | addons-748280          | jenkins | v1.30.1 | 31 May 23 18:50 UTC | 31 May 23 18:50 UTC |
	|         | registry --alsologtostderr     |                        |         |         |                     |                     |
	|         | -v=1                           |                        |         |         |                     |                     |
	| addons  | addons-748280 addons disable   | addons-748280          | jenkins | v1.30.1 | 31 May 23 18:50 UTC | 31 May 23 18:50 UTC |
	|         | ingress-dns --alsologtostderr  |                        |         |         |                     |                     |
	|         | -v=1                           |                        |         |         |                     |                     |
	| addons  | addons-748280 addons disable   | addons-748280          | jenkins | v1.30.1 | 31 May 23 18:50 UTC | 31 May 23 18:50 UTC |
	|         | ingress --alsologtostderr -v=1 |                        |         |         |                     |                     |
	|---------|--------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/05/31 18:44:34
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.20.4 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0531 18:44:34.199212    8307 out.go:296] Setting OutFile to fd 1 ...
	I0531 18:44:34.199335    8307 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0531 18:44:34.199345    8307 out.go:309] Setting ErrFile to fd 2...
	I0531 18:44:34.199350    8307 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0531 18:44:34.199520    8307 root.go:336] Updating PATH: /home/jenkins/minikube-integration/16569-2389/.minikube/bin
	I0531 18:44:34.199963    8307 out.go:303] Setting JSON to false
	I0531 18:44:34.200663    8307 start.go:127] hostinfo: {"hostname":"ip-172-31-21-244","uptime":1620,"bootTime":1685557055,"procs":146,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1037-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I0531 18:44:34.200726    8307 start.go:137] virtualization:  
	I0531 18:44:34.203125    8307 out.go:177] * [addons-748280] minikube v1.30.1 on Ubuntu 20.04 (arm64)
	I0531 18:44:34.205045    8307 out.go:177]   - MINIKUBE_LOCATION=16569
	I0531 18:44:34.206660    8307 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0531 18:44:34.205135    8307 notify.go:220] Checking for updates...
	I0531 18:44:34.210461    8307 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/16569-2389/kubeconfig
	I0531 18:44:34.212215    8307 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/16569-2389/.minikube
	I0531 18:44:34.214212    8307 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0531 18:44:34.216012    8307 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0531 18:44:34.218055    8307 driver.go:375] Setting default libvirt URI to qemu:///system
	I0531 18:44:34.243399    8307 docker.go:121] docker version: linux-24.0.2:Docker Engine - Community
	I0531 18:44:34.243526    8307 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0531 18:44:34.330366    8307 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:28 OomKillDisable:true NGoroutines:38 SystemTime:2023-05-31 18:44:34.320213482 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1037-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215166976 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:24.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dce8eb055cbb6872793272b4f20ed16117344f8 Expected:3dce8eb055cbb6872793272b4f20ed16117344f8} RuncCommit:{ID:v1.1.7-0-g860f061 Expected:v1.1.7-0-g860f061} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.5] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.18.1]] Warnings:<nil>}}
	I0531 18:44:34.330472    8307 docker.go:294] overlay module found
	I0531 18:44:34.333786    8307 out.go:177] * Using the docker driver based on user configuration
	I0531 18:44:34.335419    8307 start.go:297] selected driver: docker
	I0531 18:44:34.335438    8307 start.go:875] validating driver "docker" against <nil>
	I0531 18:44:34.335452    8307 start.go:886] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0531 18:44:34.336111    8307 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0531 18:44:34.396640    8307 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:28 OomKillDisable:true NGoroutines:38 SystemTime:2023-05-31 18:44:34.387230156 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1037-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215166976 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:24.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dce8eb055cbb6872793272b4f20ed16117344f8 Expected:3dce8eb055cbb6872793272b4f20ed16117344f8} RuncCommit:{ID:v1.1.7-0-g860f061 Expected:v1.1.7-0-g860f061} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.5] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.18.1]] Warnings:<nil>}}
	I0531 18:44:34.396789    8307 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0531 18:44:34.397011    8307 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0531 18:44:34.398860    8307 out.go:177] * Using Docker driver with root privileges
	I0531 18:44:34.400901    8307 cni.go:84] Creating CNI manager for ""
	I0531 18:44:34.400926    8307 cni.go:142] "docker" driver + "crio" runtime found, recommending kindnet
	I0531 18:44:34.400936    8307 start_flags.go:314] Found "CNI" CNI - setting NetworkPlugin=cni
	I0531 18:44:34.400953    8307 start_flags.go:319] config:
	{Name:addons-748280 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685034446-16582@sha256:aa728b22374c829d1e5b0a5d64d51d3e0ae0f2b191381d957516fdff68f357c8 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:addons-748280 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0531 18:44:34.403668    8307 out.go:177] * Starting control plane node addons-748280 in cluster addons-748280
	I0531 18:44:34.405618    8307 cache.go:122] Beginning downloading kic base image for docker with crio
	I0531 18:44:34.407574    8307 out.go:177] * Pulling base image ...
	I0531 18:44:34.409468    8307 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685034446-16582@sha256:aa728b22374c829d1e5b0a5d64d51d3e0ae0f2b191381d957516fdff68f357c8 in local docker daemon
	I0531 18:44:34.409518    8307 preload.go:132] Checking if preload exists for k8s version v1.27.2 and runtime crio
	I0531 18:44:34.409559    8307 preload.go:148] Found local preload: /home/jenkins/minikube-integration/16569-2389/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.2-cri-o-overlay-arm64.tar.lz4
	I0531 18:44:34.409568    8307 cache.go:57] Caching tarball of preloaded images
	I0531 18:44:34.409628    8307 preload.go:174] Found /home/jenkins/minikube-integration/16569-2389/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.2-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I0531 18:44:34.409638    8307 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.2 on crio
	I0531 18:44:34.409974    8307 profile.go:148] Saving config to /home/jenkins/minikube-integration/16569-2389/.minikube/profiles/addons-748280/config.json ...
	I0531 18:44:34.409994    8307 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16569-2389/.minikube/profiles/addons-748280/config.json: {Name:mka7b556e1d2f2dbe052c145af41fb940259c005 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0531 18:44:34.426908    8307 cache.go:150] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685034446-16582@sha256:aa728b22374c829d1e5b0a5d64d51d3e0ae0f2b191381d957516fdff68f357c8 to local cache
	I0531 18:44:34.427017    8307 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685034446-16582@sha256:aa728b22374c829d1e5b0a5d64d51d3e0ae0f2b191381d957516fdff68f357c8 in local cache directory
	I0531 18:44:34.427043    8307 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685034446-16582@sha256:aa728b22374c829d1e5b0a5d64d51d3e0ae0f2b191381d957516fdff68f357c8 in local cache directory, skipping pull
	I0531 18:44:34.427052    8307 image.go:105] gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685034446-16582@sha256:aa728b22374c829d1e5b0a5d64d51d3e0ae0f2b191381d957516fdff68f357c8 exists in cache, skipping pull
	I0531 18:44:34.427059    8307 cache.go:153] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685034446-16582@sha256:aa728b22374c829d1e5b0a5d64d51d3e0ae0f2b191381d957516fdff68f357c8 as a tarball
	I0531 18:44:34.427069    8307 cache.go:163] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685034446-16582@sha256:aa728b22374c829d1e5b0a5d64d51d3e0ae0f2b191381d957516fdff68f357c8 from local cache
	I0531 18:44:49.656212    8307 cache.go:165] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685034446-16582@sha256:aa728b22374c829d1e5b0a5d64d51d3e0ae0f2b191381d957516fdff68f357c8 from cached tarball
	I0531 18:44:49.656251    8307 cache.go:195] Successfully downloaded all kic artifacts
	I0531 18:44:49.656301    8307 start.go:364] acquiring machines lock for addons-748280: {Name:mkb49c926704a8994ccf8fa9f553fc7de82d6161 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0531 18:44:49.656425    8307 start.go:368] acquired machines lock for "addons-748280" in 100.659µs
	I0531 18:44:49.656458    8307 start.go:93] Provisioning new machine with config: &{Name:addons-748280 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685034446-16582@sha256:aa728b22374c829d1e5b0a5d64d51d3e0ae0f2b191381d957516fdff68f357c8 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:addons-748280 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:} &{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0531 18:44:49.656539    8307 start.go:125] createHost starting for "" (driver="docker")
	I0531 18:44:49.658884    8307 out.go:204] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I0531 18:44:49.659117    8307 start.go:159] libmachine.API.Create for "addons-748280" (driver="docker")
	I0531 18:44:49.659152    8307 client.go:168] LocalClient.Create starting
	I0531 18:44:49.659279    8307 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/16569-2389/.minikube/certs/ca.pem
	I0531 18:44:50.198993    8307 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/16569-2389/.minikube/certs/cert.pem
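The two lines above generate a fresh CA and a client certificate under .minikube/certs. As a rough illustration of the CA half, here is a self-contained Go sketch using only the standard library; it is a stand-in under assumed parameters (key size, validity), not minikube's actual certificate helper:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"os"
	"time"
)

func main() {
	// Generate the CA key pair (2048-bit RSA assumed for the sketch).
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	// Self-signed CA template; the CommonName mirrors the minikubeCA name seen later in this log.
	tmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(10, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
		BasicConstraintsValid: true,
	}
	// Sign the certificate with its own key (template == parent for a root CA).
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	// Emit ca.pem-style output.
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}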
	I0531 18:44:50.547448    8307 cli_runner.go:164] Run: docker network inspect addons-748280 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0531 18:44:50.569103    8307 cli_runner.go:211] docker network inspect addons-748280 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0531 18:44:50.569207    8307 network_create.go:281] running [docker network inspect addons-748280] to gather additional debugging logs...
	I0531 18:44:50.569228    8307 cli_runner.go:164] Run: docker network inspect addons-748280
	W0531 18:44:50.592054    8307 cli_runner.go:211] docker network inspect addons-748280 returned with exit code 1
	I0531 18:44:50.592090    8307 network_create.go:284] error running [docker network inspect addons-748280]: docker network inspect addons-748280: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-748280 not found
	I0531 18:44:50.592102    8307 network_create.go:286] output of [docker network inspect addons-748280]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-748280 not found
	
	** /stderr **
	I0531 18:44:50.592176    8307 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0531 18:44:50.610931    8307 network.go:209] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4000b9ea50}
	I0531 18:44:50.610979    8307 network_create.go:123] attempt to create docker network addons-748280 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0531 18:44:50.611036    8307 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-748280 addons-748280
	I0531 18:44:50.683505    8307 network_create.go:107] docker network addons-748280 192.168.49.0/24 created
	I0531 18:44:50.683538    8307 kic.go:117] calculated static IP "192.168.49.2" for the "addons-748280" container
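The network.go and kic.go lines above pick the free subnet 192.168.49.0/24 and derive from it the gateway (.1), the client range (.2-.254), and the node's static IP (.2, the first client address). A small Go sketch of that derivation, an illustration rather than minikube's implementation:

package main

import (
	"fmt"
	"net"
)

func main() {
	_, ipnet, err := net.ParseCIDR("192.168.49.0/24")
	if err != nil {
		panic(err)
	}
	base := ipnet.IP.To4()
	gateway := net.IPv4(base[0], base[1], base[2], 1)     // 192.168.49.1
	clientMin := net.IPv4(base[0], base[1], base[2], 2)   // 192.168.49.2, used as the node's static IP
	clientMax := net.IPv4(base[0], base[1], base[2], 254) // 192.168.49.254
	broadcast := net.IPv4(base[0], base[1], base[2], 255) // 192.168.49.255
	fmt.Println(ipnet, gateway, clientMin, clientMax, broadcast)
}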
	I0531 18:44:50.683636    8307 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0531 18:44:50.701348    8307 cli_runner.go:164] Run: docker volume create addons-748280 --label name.minikube.sigs.k8s.io=addons-748280 --label created_by.minikube.sigs.k8s.io=true
	I0531 18:44:50.723389    8307 oci.go:103] Successfully created a docker volume addons-748280
	I0531 18:44:50.723482    8307 cli_runner.go:164] Run: docker run --rm --name addons-748280-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-748280 --entrypoint /usr/bin/test -v addons-748280:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685034446-16582@sha256:aa728b22374c829d1e5b0a5d64d51d3e0ae0f2b191381d957516fdff68f357c8 -d /var/lib
	I0531 18:44:52.986241    8307 cli_runner.go:217] Completed: docker run --rm --name addons-748280-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-748280 --entrypoint /usr/bin/test -v addons-748280:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685034446-16582@sha256:aa728b22374c829d1e5b0a5d64d51d3e0ae0f2b191381d957516fdff68f357c8 -d /var/lib: (2.262716848s)
	I0531 18:44:52.986275    8307 oci.go:107] Successfully prepared a docker volume addons-748280
	I0531 18:44:52.986300    8307 preload.go:132] Checking if preload exists for k8s version v1.27.2 and runtime crio
	I0531 18:44:52.986318    8307 kic.go:190] Starting extracting preloaded images to volume ...
	I0531 18:44:52.986398    8307 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/16569-2389/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.2-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-748280:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685034446-16582@sha256:aa728b22374c829d1e5b0a5d64d51d3e0ae0f2b191381d957516fdff68f357c8 -I lz4 -xf /preloaded.tar -C /extractDir
	I0531 18:44:57.140932    8307 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/16569-2389/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.2-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-748280:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685034446-16582@sha256:aa728b22374c829d1e5b0a5d64d51d3e0ae0f2b191381d957516fdff68f357c8 -I lz4 -xf /preloaded.tar -C /extractDir: (4.154496534s)
	I0531 18:44:57.140964    8307 kic.go:199] duration metric: took 4.154642 seconds to extract preloaded images to volume
	W0531 18:44:57.141129    8307 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0531 18:44:57.141253    8307 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0531 18:44:57.207736    8307 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-748280 --name addons-748280 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-748280 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-748280 --network addons-748280 --ip 192.168.49.2 --volume addons-748280:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685034446-16582@sha256:aa728b22374c829d1e5b0a5d64d51d3e0ae0f2b191381d957516fdff68f357c8
	I0531 18:44:57.600385    8307 cli_runner.go:164] Run: docker container inspect addons-748280 --format={{.State.Running}}
	I0531 18:44:57.630120    8307 cli_runner.go:164] Run: docker container inspect addons-748280 --format={{.State.Status}}
	I0531 18:44:57.665080    8307 cli_runner.go:164] Run: docker exec addons-748280 stat /var/lib/dpkg/alternatives/iptables
	I0531 18:44:57.754869    8307 oci.go:144] the created container "addons-748280" has a running status.
	I0531 18:44:57.754894    8307 kic.go:221] Creating ssh key for kic: /home/jenkins/minikube-integration/16569-2389/.minikube/machines/addons-748280/id_rsa...
	I0531 18:44:58.437004    8307 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/16569-2389/.minikube/machines/addons-748280/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0531 18:44:58.479968    8307 cli_runner.go:164] Run: docker container inspect addons-748280 --format={{.State.Status}}
	I0531 18:44:58.512482    8307 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0531 18:44:58.512501    8307 kic_runner.go:114] Args: [docker exec --privileged addons-748280 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0531 18:44:58.611991    8307 cli_runner.go:164] Run: docker container inspect addons-748280 --format={{.State.Status}}
	I0531 18:44:58.637094    8307 machine.go:88] provisioning docker machine ...
	I0531 18:44:58.637125    8307 ubuntu.go:169] provisioning hostname "addons-748280"
	I0531 18:44:58.637191    8307 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-748280
	I0531 18:44:58.663927    8307 main.go:141] libmachine: Using SSH client type: native
	I0531 18:44:58.664402    8307 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x39eaa0] 0x3a1430 <nil>  [] 0s} 127.0.0.1 32772 <nil> <nil>}
	I0531 18:44:58.664423    8307 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-748280 && echo "addons-748280" | sudo tee /etc/hostname
	I0531 18:44:58.820805    8307 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-748280
	
	I0531 18:44:58.820964    8307 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-748280
	I0531 18:44:58.843803    8307 main.go:141] libmachine: Using SSH client type: native
	I0531 18:44:58.844231    8307 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x39eaa0] 0x3a1430 <nil>  [] 0s} 127.0.0.1 32772 <nil> <nil>}
	I0531 18:44:58.844248    8307 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-748280' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-748280/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-748280' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0531 18:44:58.979966    8307 main.go:141] libmachine: SSH cmd err, output: <nil>: 
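The SSH script above is idempotent: it adds a 127.0.1.1 entry for the hostname only when none exists, rewriting an existing 127.0.1.1 line rather than appending a duplicate. The same logic as a self-contained Go sketch (illustrative only, not minikube's code):

package main

import (
	"fmt"
	"regexp"
)

// ensureHostname mirrors the grep/sed/tee logic from the SSH script above.
func ensureHostname(hosts, name string) string {
	if regexp.MustCompile(`(?m)^.*\s`+regexp.QuoteMeta(name)+`$`).MatchString(hosts) {
		return hosts // hostname already resolvable, nothing to do
	}
	loopback := regexp.MustCompile(`(?m)^127\.0\.1\.1\s.*$`)
	if loopback.MatchString(hosts) {
		return loopback.ReplaceAllString(hosts, "127.0.1.1 "+name)
	}
	return hosts + "127.0.1.1 " + name + "\n"
}

func main() {
	fmt.Print(ensureHostname("127.0.0.1 localhost\n", "addons-748280"))
}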
	I0531 18:44:58.979994    8307 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/16569-2389/.minikube CaCertPath:/home/jenkins/minikube-integration/16569-2389/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/16569-2389/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/16569-2389/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/16569-2389/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/16569-2389/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/16569-2389/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/16569-2389/.minikube}
	I0531 18:44:58.980012    8307 ubuntu.go:177] setting up certificates
	I0531 18:44:58.980021    8307 provision.go:83] configureAuth start
	I0531 18:44:58.980087    8307 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-748280
	I0531 18:44:58.998497    8307 provision.go:138] copyHostCerts
	I0531 18:44:58.998601    8307 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16569-2389/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/16569-2389/.minikube/ca.pem (1078 bytes)
	I0531 18:44:58.998763    8307 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16569-2389/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/16569-2389/.minikube/cert.pem (1123 bytes)
	I0531 18:44:58.998850    8307 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16569-2389/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/16569-2389/.minikube/key.pem (1679 bytes)
	I0531 18:44:58.998915    8307 provision.go:112] generating server cert: /home/jenkins/minikube-integration/16569-2389/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/16569-2389/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/16569-2389/.minikube/certs/ca-key.pem org=jenkins.addons-748280 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube addons-748280]
	I0531 18:44:59.525131    8307 provision.go:172] copyRemoteCerts
	I0531 18:44:59.525203    8307 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0531 18:44:59.525250    8307 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-748280
	I0531 18:44:59.544897    8307 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/16569-2389/.minikube/machines/addons-748280/id_rsa Username:docker}
	I0531 18:44:59.641671    8307 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16569-2389/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0531 18:44:59.670850    8307 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16569-2389/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0531 18:44:59.700812    8307 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16569-2389/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0531 18:44:59.729661    8307 provision.go:86] duration metric: configureAuth took 749.626443ms
	I0531 18:44:59.729776    8307 ubuntu.go:193] setting minikube options for container-runtime
	I0531 18:44:59.729976    8307 config.go:182] Loaded profile config "addons-748280": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.27.2
	I0531 18:44:59.730079    8307 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-748280
	I0531 18:44:59.748685    8307 main.go:141] libmachine: Using SSH client type: native
	I0531 18:44:59.749114    8307 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x39eaa0] 0x3a1430 <nil>  [] 0s} 127.0.0.1 32772 <nil> <nil>}
	I0531 18:44:59.749136    8307 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0531 18:45:00.020308    8307 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0531 18:45:00.020391    8307 machine.go:91] provisioned docker machine in 1.383275276s
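The %!s(MISSING) in the mkdir/printf command a few lines above is not part of what ran on the machine; it is Go's fmt package marking a %s verb that had no matching argument when the command string was echoed into this log (the %!p(MISSING) and similar tokens later in the log are the same class of artifact). A one-line demonstration:

package main

import "fmt"

func main() {
	// Formatting a %s verb with no argument reproduces the log artifact.
	fmt.Println(fmt.Sprintf("printf %s")) // prints: printf %!s(MISSING)
}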
	I0531 18:45:00.020418    8307 client.go:171] LocalClient.Create took 10.361256105s
	I0531 18:45:00.020489    8307 start.go:167] duration metric: libmachine.API.Create for "addons-748280" took 10.361330976s
	I0531 18:45:00.020541    8307 start.go:300] post-start starting for "addons-748280" (driver="docker")
	I0531 18:45:00.020576    8307 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0531 18:45:00.020729    8307 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0531 18:45:00.020817    8307 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-748280
	I0531 18:45:00.081030    8307 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/16569-2389/.minikube/machines/addons-748280/id_rsa Username:docker}
	I0531 18:45:00.202521    8307 ssh_runner.go:195] Run: cat /etc/os-release
	I0531 18:45:00.208046    8307 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0531 18:45:00.208087    8307 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0531 18:45:00.208100    8307 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0531 18:45:00.208106    8307 info.go:137] Remote host: Ubuntu 22.04.2 LTS
	I0531 18:45:00.208117    8307 filesync.go:126] Scanning /home/jenkins/minikube-integration/16569-2389/.minikube/addons for local assets ...
	I0531 18:45:00.208202    8307 filesync.go:126] Scanning /home/jenkins/minikube-integration/16569-2389/.minikube/files for local assets ...
	I0531 18:45:00.208246    8307 start.go:303] post-start completed in 187.671698ms
	I0531 18:45:00.208591    8307 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-748280
	I0531 18:45:00.230526    8307 profile.go:148] Saving config to /home/jenkins/minikube-integration/16569-2389/.minikube/profiles/addons-748280/config.json ...
	I0531 18:45:00.230917    8307 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0531 18:45:00.230981    8307 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-748280
	I0531 18:45:00.252554    8307 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/16569-2389/.minikube/machines/addons-748280/id_rsa Username:docker}
	I0531 18:45:00.345419    8307 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0531 18:45:00.351940    8307 start.go:128] duration metric: createHost completed in 10.695386174s
	I0531 18:45:00.351966    8307 start.go:83] releasing machines lock for "addons-748280", held for 10.695527186s
	I0531 18:45:00.352041    8307 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-748280
	I0531 18:45:00.372335    8307 ssh_runner.go:195] Run: cat /version.json
	I0531 18:45:00.372354    8307 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0531 18:45:00.372400    8307 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-748280
	I0531 18:45:00.372427    8307 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-748280
	I0531 18:45:00.405393    8307 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/16569-2389/.minikube/machines/addons-748280/id_rsa Username:docker}
	I0531 18:45:00.406393    8307 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/16569-2389/.minikube/machines/addons-748280/id_rsa Username:docker}
	I0531 18:45:00.500117    8307 ssh_runner.go:195] Run: systemctl --version
	I0531 18:45:00.640250    8307 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0531 18:45:00.787841    8307 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0531 18:45:00.793315    8307 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0531 18:45:00.819180    8307 cni.go:220] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0531 18:45:00.819261    8307 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0531 18:45:00.865192    8307 cni.go:261] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0531 18:45:00.865251    8307 start.go:481] detecting cgroup driver to use...
	I0531 18:45:00.865297    8307 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0531 18:45:00.865370    8307 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0531 18:45:00.885925    8307 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0531 18:45:00.900754    8307 docker.go:193] disabling cri-docker service (if available) ...
	I0531 18:45:00.900842    8307 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0531 18:45:00.917150    8307 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0531 18:45:00.935795    8307 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0531 18:45:01.037224    8307 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0531 18:45:01.147629    8307 docker.go:209] disabling docker service ...
	I0531 18:45:01.147704    8307 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0531 18:45:01.169926    8307 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0531 18:45:01.184167    8307 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0531 18:45:01.290326    8307 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0531 18:45:01.402641    8307 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0531 18:45:01.416289    8307 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0531 18:45:01.436394    8307 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0531 18:45:01.436510    8307 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0531 18:45:01.449935    8307 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0531 18:45:01.450005    8307 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0531 18:45:01.462473    8307 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0531 18:45:01.474607    8307 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
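The sed invocations above rewrite /etc/crio/crio.conf.d/02-crio.conf so cri-o uses the cgroupfs driver detected earlier and runs conmon in the pod cgroup. A Go sketch of the cgroup-related rewrites over a sample config fragment; the fragment's contents are assumed for illustration and this is not minikube's code:

package main

import (
	"fmt"
	"regexp"
)

func main() {
	// Hypothetical fragment of 02-crio.conf before the rewrite.
	conf := "[crio.runtime]\ncgroup_manager = \"systemd\"\nconmon_cgroup = \"system.slice\"\n"
	// sed 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|'
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)
	// sed '/conmon_cgroup = .*/d'
	conf = regexp.MustCompile(`(?m)^conmon_cgroup = .*\n`).ReplaceAllString(conf, "")
	// sed '/cgroup_manager = .*/a conmon_cgroup = "pod"'
	conf = regexp.MustCompile(`(?m)^cgroup_manager = .*$`).
		ReplaceAllString(conf, "$0\nconmon_cgroup = \"pod\"")
	fmt.Print(conf)
}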
	I0531 18:45:01.486889    8307 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0531 18:45:01.498654    8307 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0531 18:45:01.509731    8307 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0531 18:45:01.520743    8307 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0531 18:45:01.608658    8307 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0531 18:45:01.730095    8307 start.go:528] Will wait 60s for socket path /var/run/crio/crio.sock
	I0531 18:45:01.730173    8307 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0531 18:45:01.735016    8307 start.go:549] Will wait 60s for crictl version
	I0531 18:45:01.735079    8307 ssh_runner.go:195] Run: which crictl
	I0531 18:45:01.739776    8307 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0531 18:45:01.782322    8307 start.go:565] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.5
	RuntimeApiVersion:  v1
	I0531 18:45:01.782510    8307 ssh_runner.go:195] Run: crio --version
	I0531 18:45:01.826461    8307 ssh_runner.go:195] Run: crio --version
	I0531 18:45:01.871612    8307 out.go:177] * Preparing Kubernetes v1.27.2 on CRI-O 1.24.5 ...
	I0531 18:45:01.873172    8307 cli_runner.go:164] Run: docker network inspect addons-748280 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0531 18:45:01.891312    8307 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0531 18:45:01.896148    8307 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0531 18:45:01.910311    8307 preload.go:132] Checking if preload exists for k8s version v1.27.2 and runtime crio
	I0531 18:45:01.910384    8307 ssh_runner.go:195] Run: sudo crictl images --output json
	I0531 18:45:01.980188    8307 crio.go:496] all images are preloaded for cri-o runtime.
	I0531 18:45:01.980219    8307 crio.go:415] Images already preloaded, skipping extraction
	I0531 18:45:01.980281    8307 ssh_runner.go:195] Run: sudo crictl images --output json
	I0531 18:45:02.028255    8307 crio.go:496] all images are preloaded for cri-o runtime.
	I0531 18:45:02.028278    8307 cache_images.go:84] Images are preloaded, skipping loading
	I0531 18:45:02.028355    8307 ssh_runner.go:195] Run: crio config
	I0531 18:45:02.088423    8307 cni.go:84] Creating CNI manager for ""
	I0531 18:45:02.088489    8307 cni.go:142] "docker" driver + "crio" runtime found, recommending kindnet
	I0531 18:45:02.088510    8307 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0531 18:45:02.088531    8307 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.27.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-748280 NodeName:addons-748280 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0531 18:45:02.088679    8307 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-748280"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.27.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0531 18:45:02.088778    8307 kubeadm.go:971] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.27.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --enforce-node-allocatable= --hostname-override=addons-748280 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.27.2 ClusterName:addons-748280 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0531 18:45:02.088850    8307 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.27.2
	I0531 18:45:02.100181    8307 binaries.go:44] Found k8s binaries, skipping transfer
	I0531 18:45:02.100327    8307 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0531 18:45:02.111391    8307 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (423 bytes)
	I0531 18:45:02.134464    8307 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0531 18:45:02.158425    8307 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2094 bytes)
	I0531 18:45:02.181347    8307 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0531 18:45:02.186988    8307 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0531 18:45:02.200937    8307 certs.go:56] Setting up /home/jenkins/minikube-integration/16569-2389/.minikube/profiles/addons-748280 for IP: 192.168.49.2
	I0531 18:45:02.200969    8307 certs.go:190] acquiring lock for shared ca certs: {Name:mk0147accf8b8da231d39646bdc89fced67451cd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0531 18:45:02.201099    8307 certs.go:204] generating minikubeCA CA: /home/jenkins/minikube-integration/16569-2389/.minikube/ca.key
	I0531 18:45:02.559806    8307 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/16569-2389/.minikube/ca.crt ...
	I0531 18:45:02.559836    8307 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16569-2389/.minikube/ca.crt: {Name:mk1ba87ff99ad095694275f285b29b67f66bdcd2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0531 18:45:02.560017    8307 crypto.go:164] Writing key to /home/jenkins/minikube-integration/16569-2389/.minikube/ca.key ...
	I0531 18:45:02.560029    8307 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16569-2389/.minikube/ca.key: {Name:mk89234849bfd4ebf31d5cca0486baba56b6f968 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0531 18:45:02.560115    8307 certs.go:204] generating proxyClientCA CA: /home/jenkins/minikube-integration/16569-2389/.minikube/proxy-client-ca.key
	I0531 18:45:03.034284    8307 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/16569-2389/.minikube/proxy-client-ca.crt ...
	I0531 18:45:03.034315    8307 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16569-2389/.minikube/proxy-client-ca.crt: {Name:mk6ea7cc75db9fa0483654cf8f122fd3b0e3609c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0531 18:45:03.034500    8307 crypto.go:164] Writing key to /home/jenkins/minikube-integration/16569-2389/.minikube/proxy-client-ca.key ...
	I0531 18:45:03.034512    8307 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16569-2389/.minikube/proxy-client-ca.key: {Name:mkf918c2ce14783aab516b848bd9c9e74db86d4d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0531 18:45:03.034635    8307 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/16569-2389/.minikube/profiles/addons-748280/client.key
	I0531 18:45:03.034652    8307 crypto.go:68] Generating cert /home/jenkins/minikube-integration/16569-2389/.minikube/profiles/addons-748280/client.crt with IP's: []
	I0531 18:45:03.595796    8307 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/16569-2389/.minikube/profiles/addons-748280/client.crt ...
	I0531 18:45:03.595826    8307 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16569-2389/.minikube/profiles/addons-748280/client.crt: {Name:mkba432b72effdb186ae16d5dfa242c36c5ccf2d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0531 18:45:03.596020    8307 crypto.go:164] Writing key to /home/jenkins/minikube-integration/16569-2389/.minikube/profiles/addons-748280/client.key ...
	I0531 18:45:03.596033    8307 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16569-2389/.minikube/profiles/addons-748280/client.key: {Name:mkbb3264ea9ca332563dc8e996b5eaa1af5da2c8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0531 18:45:03.596120    8307 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/16569-2389/.minikube/profiles/addons-748280/apiserver.key.dd3b5fb2
	I0531 18:45:03.596138    8307 crypto.go:68] Generating cert /home/jenkins/minikube-integration/16569-2389/.minikube/profiles/addons-748280/apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0531 18:45:04.086595    8307 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/16569-2389/.minikube/profiles/addons-748280/apiserver.crt.dd3b5fb2 ...
	I0531 18:45:04.086628    8307 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16569-2389/.minikube/profiles/addons-748280/apiserver.crt.dd3b5fb2: {Name:mk92505edbaa43f853457c596ae242259dfc280e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0531 18:45:04.086844    8307 crypto.go:164] Writing key to /home/jenkins/minikube-integration/16569-2389/.minikube/profiles/addons-748280/apiserver.key.dd3b5fb2 ...
	I0531 18:45:04.086860    8307 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16569-2389/.minikube/profiles/addons-748280/apiserver.key.dd3b5fb2: {Name:mk13060a044e0f51fbe1670d26b3d49304e52c4f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0531 18:45:04.086947    8307 certs.go:337] copying /home/jenkins/minikube-integration/16569-2389/.minikube/profiles/addons-748280/apiserver.crt.dd3b5fb2 -> /home/jenkins/minikube-integration/16569-2389/.minikube/profiles/addons-748280/apiserver.crt
	I0531 18:45:04.087018    8307 certs.go:341] copying /home/jenkins/minikube-integration/16569-2389/.minikube/profiles/addons-748280/apiserver.key.dd3b5fb2 -> /home/jenkins/minikube-integration/16569-2389/.minikube/profiles/addons-748280/apiserver.key
	I0531 18:45:04.087069    8307 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/16569-2389/.minikube/profiles/addons-748280/proxy-client.key
	I0531 18:45:04.087088    8307 crypto.go:68] Generating cert /home/jenkins/minikube-integration/16569-2389/.minikube/profiles/addons-748280/proxy-client.crt with IP's: []
	I0531 18:45:04.855975    8307 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/16569-2389/.minikube/profiles/addons-748280/proxy-client.crt ...
	I0531 18:45:04.856005    8307 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16569-2389/.minikube/profiles/addons-748280/proxy-client.crt: {Name:mk779e87f69baa06f87ee439c4e4bba857c5ab50 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0531 18:45:04.856197    8307 crypto.go:164] Writing key to /home/jenkins/minikube-integration/16569-2389/.minikube/profiles/addons-748280/proxy-client.key ...
	I0531 18:45:04.856210    8307 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16569-2389/.minikube/profiles/addons-748280/proxy-client.key: {Name:mkd3f8fd1539e560140df2154fd5479bb0686a7f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0531 18:45:04.856396    8307 certs.go:437] found cert: /home/jenkins/minikube-integration/16569-2389/.minikube/certs/home/jenkins/minikube-integration/16569-2389/.minikube/certs/ca-key.pem (1675 bytes)
	I0531 18:45:04.856438    8307 certs.go:437] found cert: /home/jenkins/minikube-integration/16569-2389/.minikube/certs/home/jenkins/minikube-integration/16569-2389/.minikube/certs/ca.pem (1078 bytes)
	I0531 18:45:04.856463    8307 certs.go:437] found cert: /home/jenkins/minikube-integration/16569-2389/.minikube/certs/home/jenkins/minikube-integration/16569-2389/.minikube/certs/cert.pem (1123 bytes)
	I0531 18:45:04.856493    8307 certs.go:437] found cert: /home/jenkins/minikube-integration/16569-2389/.minikube/certs/home/jenkins/minikube-integration/16569-2389/.minikube/certs/key.pem (1679 bytes)
	I0531 18:45:04.857067    8307 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16569-2389/.minikube/profiles/addons-748280/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0531 18:45:04.885717    8307 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16569-2389/.minikube/profiles/addons-748280/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0531 18:45:04.914158    8307 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16569-2389/.minikube/profiles/addons-748280/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0531 18:45:04.943134    8307 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16569-2389/.minikube/profiles/addons-748280/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0531 18:45:04.972275    8307 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16569-2389/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0531 18:45:05.003205    8307 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16569-2389/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0531 18:45:05.034089    8307 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16569-2389/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0531 18:45:05.062298    8307 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16569-2389/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0531 18:45:05.092000    8307 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16569-2389/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0531 18:45:05.120889    8307 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0531 18:45:05.142302    8307 ssh_runner.go:195] Run: openssl version
	I0531 18:45:05.150342    8307 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0531 18:45:05.162861    8307 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0531 18:45:05.167636    8307 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 May 31 18:45 /usr/share/ca-certificates/minikubeCA.pem
	I0531 18:45:05.167744    8307 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0531 18:45:05.176462    8307 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0531 18:45:05.187975    8307 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0531 18:45:05.192402    8307 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0531 18:45:05.192457    8307 kubeadm.go:404] StartCluster: {Name:addons-748280 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685034446-16582@sha256:aa728b22374c829d1e5b0a5d64d51d3e0ae0f2b191381d957516fdff68f357c8 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:addons-748280 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0531 18:45:05.192538    8307 cri.go:53] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0531 18:45:05.192593    8307 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0531 18:45:05.234708    8307 cri.go:88] found id: ""
	I0531 18:45:05.234799    8307 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0531 18:45:05.245430    8307 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0531 18:45:05.256262    8307 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0531 18:45:05.256325    8307 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0531 18:45:05.267171    8307 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0531 18:45:05.267216    8307 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0531 18:45:05.371934    8307 kubeadm.go:322] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1037-aws\n", err: exit status 1
	I0531 18:45:05.457543    8307 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0531 18:45:05.457753    8307 kubeadm.go:322] W0531 18:45:05.457018    1052 images.go:80] could not find officially supported version of etcd for Kubernetes v1.27.2, falling back to the nearest etcd version (3.5.7-0)
	I0531 18:45:12.746265    8307 kubeadm.go:322] W0531 18:45:12.745910    1052 images.go:80] could not find officially supported version of etcd for Kubernetes v1.27.2, falling back to the nearest etcd version (3.5.7-0)
	I0531 18:45:21.741713    8307 kubeadm.go:322] [init] Using Kubernetes version: v1.27.2
	I0531 18:45:21.741776    8307 kubeadm.go:322] [preflight] Running pre-flight checks
	I0531 18:45:21.741864    8307 kubeadm.go:322] [preflight] The system verification failed. Printing the output from the verification:
	I0531 18:45:21.741930    8307 kubeadm.go:322] KERNEL_VERSION: 5.15.0-1037-aws
	I0531 18:45:21.741968    8307 kubeadm.go:322] OS: Linux
	I0531 18:45:21.742054    8307 kubeadm.go:322] CGROUPS_CPU: enabled
	I0531 18:45:21.742135    8307 kubeadm.go:322] CGROUPS_CPUACCT: enabled
	I0531 18:45:21.742227    8307 kubeadm.go:322] CGROUPS_CPUSET: enabled
	I0531 18:45:21.742316    8307 kubeadm.go:322] CGROUPS_DEVICES: enabled
	I0531 18:45:21.742370    8307 kubeadm.go:322] CGROUPS_FREEZER: enabled
	I0531 18:45:21.742471    8307 kubeadm.go:322] CGROUPS_MEMORY: enabled
	I0531 18:45:21.742543    8307 kubeadm.go:322] CGROUPS_PIDS: enabled
	I0531 18:45:21.742612    8307 kubeadm.go:322] CGROUPS_HUGETLB: enabled
	I0531 18:45:21.742679    8307 kubeadm.go:322] CGROUPS_BLKIO: enabled
	I0531 18:45:21.742788    8307 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0531 18:45:21.742955    8307 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0531 18:45:21.743059    8307 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0531 18:45:21.743132    8307 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0531 18:45:21.744938    8307 out.go:204]   - Generating certificates and keys ...
	I0531 18:45:21.745041    8307 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0531 18:45:21.745105    8307 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0531 18:45:21.745175    8307 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0531 18:45:21.745237    8307 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0531 18:45:21.745300    8307 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0531 18:45:21.745351    8307 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0531 18:45:21.745407    8307 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0531 18:45:21.745520    8307 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [addons-748280 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0531 18:45:21.745576    8307 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0531 18:45:21.745687    8307 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [addons-748280 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0531 18:45:21.745753    8307 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0531 18:45:21.745816    8307 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0531 18:45:21.745864    8307 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0531 18:45:21.745921    8307 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0531 18:45:21.745984    8307 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0531 18:45:21.746036    8307 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0531 18:45:21.746103    8307 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0531 18:45:21.746160    8307 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0531 18:45:21.746260    8307 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0531 18:45:21.746344    8307 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0531 18:45:21.746384    8307 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0531 18:45:21.746454    8307 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0531 18:45:21.748644    8307 out.go:204]   - Booting up control plane ...
	I0531 18:45:21.748789    8307 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0531 18:45:21.748876    8307 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0531 18:45:21.748975    8307 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0531 18:45:21.749067    8307 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0531 18:45:21.749260    8307 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0531 18:45:21.749357    8307 kubeadm.go:322] [apiclient] All control plane components are healthy after 7.502867 seconds
	I0531 18:45:21.749490    8307 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0531 18:45:21.749655    8307 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0531 18:45:21.749730    8307 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0531 18:45:21.749941    8307 kubeadm.go:322] [mark-control-plane] Marking the node addons-748280 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0531 18:45:21.750026    8307 kubeadm.go:322] [bootstrap-token] Using token: 9v29xc.k4dxqpcgpeqcxvgr
	I0531 18:45:21.751735    8307 out.go:204]   - Configuring RBAC rules ...
	I0531 18:45:21.751852    8307 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0531 18:45:21.751939    8307 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0531 18:45:21.752080    8307 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0531 18:45:21.752208    8307 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0531 18:45:21.752321    8307 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0531 18:45:21.752428    8307 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0531 18:45:21.752545    8307 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0531 18:45:21.752590    8307 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0531 18:45:21.752638    8307 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0531 18:45:21.752647    8307 kubeadm.go:322] 
	I0531 18:45:21.752704    8307 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0531 18:45:21.752711    8307 kubeadm.go:322] 
	I0531 18:45:21.752784    8307 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0531 18:45:21.752792    8307 kubeadm.go:322] 
	I0531 18:45:21.752818    8307 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0531 18:45:21.752876    8307 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0531 18:45:21.752928    8307 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0531 18:45:21.752936    8307 kubeadm.go:322] 
	I0531 18:45:21.752987    8307 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0531 18:45:21.752993    8307 kubeadm.go:322] 
	I0531 18:45:21.753038    8307 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0531 18:45:21.753046    8307 kubeadm.go:322] 
	I0531 18:45:21.753096    8307 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0531 18:45:21.753170    8307 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0531 18:45:21.753238    8307 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0531 18:45:21.753246    8307 kubeadm.go:322] 
	I0531 18:45:21.753325    8307 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0531 18:45:21.753401    8307 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0531 18:45:21.753409    8307 kubeadm.go:322] 
	I0531 18:45:21.753488    8307 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token 9v29xc.k4dxqpcgpeqcxvgr \
	I0531 18:45:21.753589    8307 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:8cab06d0df96335aa364fa490e6822dfb6067993e3a4e02e6ce54947f37c8db2 \
	I0531 18:45:21.753610    8307 kubeadm.go:322] 	--control-plane 
	I0531 18:45:21.753619    8307 kubeadm.go:322] 
	I0531 18:45:21.753698    8307 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0531 18:45:21.753707    8307 kubeadm.go:322] 
	I0531 18:45:21.753784    8307 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token 9v29xc.k4dxqpcgpeqcxvgr \
	I0531 18:45:21.753900    8307 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:8cab06d0df96335aa364fa490e6822dfb6067993e3a4e02e6ce54947f37c8db2 
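The join commands printed above carry a bootstrap token (9v29xc.…) plus the CA public-key hash that lets a joining node authenticate the control plane. Bootstrap tokens expire (24h by default), so on a long-lived cluster you would normally mint a fresh join command rather than reuse the one from init; a sketch with the standard kubeadm subcommands:

    # Show existing bootstrap tokens and their TTLs
    kubeadm token list
    # Create a new token and print a complete, ready-to-run join command
    kubeadm token create --print-join-command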
	I0531 18:45:21.753912    8307 cni.go:84] Creating CNI manager for ""
	I0531 18:45:21.753919    8307 cni.go:142] "docker" driver + "crio" runtime found, recommending kindnet
	I0531 18:45:21.755571    8307 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0531 18:45:21.757225    8307 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0531 18:45:21.774931    8307 cni.go:181] applying CNI manifest using /var/lib/minikube/binaries/v1.27.2/kubectl ...
	I0531 18:45:21.774957    8307 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0531 18:45:21.838508    8307 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
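Kindnet is selected above because the docker driver with the crio runtime ships no in-box pod network: the manifest is rendered into /var/tmp/minikube/cni.yaml (2438 bytes, per the scp line) and applied with the version-pinned kubectl. A quick follow-up check that the CNI daemonset actually schedules (a sketch; app=kindnet is the pod label kindnet's upstream manifest uses, assumed here):

    sudo /var/lib/minikube/binaries/v1.27.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
      -n kube-system get pods -l app=kindnet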
	I0531 18:45:22.759068    8307 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0531 18:45:22.759191    8307 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:45:22.759267    8307 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl label nodes minikube.k8s.io/version=v1.30.1 minikube.k8s.io/commit=7022875d4a054c2d518e5e5a7b9d500799d50140 minikube.k8s.io/name=addons-748280 minikube.k8s.io/updated_at=2023_05_31T18_45_22_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:45:22.964637    8307 ops.go:34] apiserver oom_adj: -16
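The -16 read back from /proc confirms the apiserver was deprioritized for the kernel OOM killer; negative adjustments make a process one of the last candidates to be killed under memory pressure. The log reads the legacy oom_adj file; the modern equivalent is oom_score_adj (a sketch of both reads):

    # Legacy interface, range -17..15 (-17 = never kill); this is what the log reads
    cat /proc/$(pgrep kube-apiserver)/oom_adj
    # Modern interface, range -1000..1000; an oom_adj of -16 maps to roughly -941
    cat /proc/$(pgrep kube-apiserver)/oom_score_adj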
	I0531 18:45:22.964721    8307 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:45:23.601498    8307 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:45:24.101513    8307 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:45:24.601590    8307 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:45:25.101626    8307 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:45:25.600916    8307 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:45:26.101546    8307 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:45:26.601381    8307 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:45:27.101618    8307 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:45:27.601855    8307 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:45:28.101801    8307 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:45:28.601430    8307 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:45:29.100961    8307 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:45:29.600906    8307 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:45:30.101618    8307 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:45:30.601011    8307 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:45:31.100970    8307 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:45:31.600858    8307 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:45:32.100910    8307 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:45:32.601548    8307 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:45:33.101488    8307 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:45:33.600908    8307 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:45:34.101578    8307 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:45:34.601367    8307 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:45:34.717975    8307 kubeadm.go:1076] duration metric: took 11.95883055s to wait for elevateKubeSystemPrivileges.
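The burst of identical `kubectl get sa default` runs between 18:45:22 and 18:45:34 is a readiness poll: the default ServiceAccount is created asynchronously by the controller manager, and the ServiceAccount admission controller rejects pods in a namespace until it exists, so minikube retries on a ~500ms cadence before proceeding. A minimal sketch of the same wait loop:

    # Poll until the service-account controller has created "default"
    until sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default \
        --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
      sleep 0.5
    done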
	I0531 18:45:34.718007    8307 kubeadm.go:406] StartCluster complete in 29.525553835s
	I0531 18:45:34.718023    8307 settings.go:142] acquiring lock: {Name:mk7112454687e7bda5617b0aa762b583179f0f78 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0531 18:45:34.718179    8307 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/16569-2389/kubeconfig
	I0531 18:45:34.719378    8307 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16569-2389/kubeconfig: {Name:mk0c7b1a200a0a97aa7bf4307790fd99336ec425 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0531 18:45:34.721085    8307 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0531 18:45:34.722158    8307 addons.go:496] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:true]
	I0531 18:45:34.722396    8307 addons.go:66] Setting volumesnapshots=true in profile "addons-748280"
	I0531 18:45:34.722422    8307 addons.go:228] Setting addon volumesnapshots=true in "addons-748280"
	I0531 18:45:34.722474    8307 host.go:66] Checking if "addons-748280" exists ...
	I0531 18:45:34.723365    8307 cli_runner.go:164] Run: docker container inspect addons-748280 --format={{.State.Status}}
	I0531 18:45:34.724770    8307 addons.go:66] Setting gcp-auth=true in profile "addons-748280"
	I0531 18:45:34.724807    8307 mustload.go:65] Loading cluster: addons-748280
	I0531 18:45:34.725139    8307 config.go:182] Loaded profile config "addons-748280": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.27.2
	I0531 18:45:34.725527    8307 cli_runner.go:164] Run: docker container inspect addons-748280 --format={{.State.Status}}
	I0531 18:45:34.725895    8307 config.go:182] Loaded profile config "addons-748280": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.27.2
	I0531 18:45:34.725958    8307 addons.go:66] Setting cloud-spanner=true in profile "addons-748280"
	I0531 18:45:34.725980    8307 addons.go:228] Setting addon cloud-spanner=true in "addons-748280"
	I0531 18:45:34.726035    8307 host.go:66] Checking if "addons-748280" exists ...
	I0531 18:45:34.726709    8307 cli_runner.go:164] Run: docker container inspect addons-748280 --format={{.State.Status}}
	I0531 18:45:34.742054    8307 addons.go:66] Setting csi-hostpath-driver=true in profile "addons-748280"
	I0531 18:45:34.742146    8307 addons.go:228] Setting addon csi-hostpath-driver=true in "addons-748280"
	I0531 18:45:34.742205    8307 host.go:66] Checking if "addons-748280" exists ...
	I0531 18:45:34.742960    8307 cli_runner.go:164] Run: docker container inspect addons-748280 --format={{.State.Status}}
	I0531 18:45:34.756907    8307 addons.go:66] Setting default-storageclass=true in profile "addons-748280"
	I0531 18:45:34.756962    8307 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-748280"
	I0531 18:45:34.757495    8307 cli_runner.go:164] Run: docker container inspect addons-748280 --format={{.State.Status}}
	I0531 18:45:34.757749    8307 addons.go:66] Setting ingress=true in profile "addons-748280"
	I0531 18:45:34.757801    8307 addons.go:228] Setting addon ingress=true in "addons-748280"
	I0531 18:45:34.757899    8307 host.go:66] Checking if "addons-748280" exists ...
	I0531 18:45:34.758615    8307 cli_runner.go:164] Run: docker container inspect addons-748280 --format={{.State.Status}}
	I0531 18:45:34.758875    8307 addons.go:66] Setting ingress-dns=true in profile "addons-748280"
	I0531 18:45:34.758903    8307 addons.go:228] Setting addon ingress-dns=true in "addons-748280"
	I0531 18:45:34.758984    8307 host.go:66] Checking if "addons-748280" exists ...
	I0531 18:45:34.759645    8307 cli_runner.go:164] Run: docker container inspect addons-748280 --format={{.State.Status}}
	I0531 18:45:34.763025    8307 addons.go:66] Setting inspektor-gadget=true in profile "addons-748280"
	I0531 18:45:34.763058    8307 addons.go:228] Setting addon inspektor-gadget=true in "addons-748280"
	I0531 18:45:34.763203    8307 host.go:66] Checking if "addons-748280" exists ...
	I0531 18:45:34.777113    8307 cli_runner.go:164] Run: docker container inspect addons-748280 --format={{.State.Status}}
	I0531 18:45:34.799579    8307 addons.go:66] Setting metrics-server=true in profile "addons-748280"
	I0531 18:45:34.799620    8307 addons.go:228] Setting addon metrics-server=true in "addons-748280"
	I0531 18:45:34.799699    8307 host.go:66] Checking if "addons-748280" exists ...
	I0531 18:45:34.800348    8307 cli_runner.go:164] Run: docker container inspect addons-748280 --format={{.State.Status}}
	I0531 18:45:34.800514    8307 addons.go:66] Setting registry=true in profile "addons-748280"
	I0531 18:45:34.800530    8307 addons.go:228] Setting addon registry=true in "addons-748280"
	I0531 18:45:34.800574    8307 host.go:66] Checking if "addons-748280" exists ...
	I0531 18:45:34.801066    8307 cli_runner.go:164] Run: docker container inspect addons-748280 --format={{.State.Status}}
	I0531 18:45:34.801160    8307 addons.go:66] Setting storage-provisioner=true in profile "addons-748280"
	I0531 18:45:34.801169    8307 addons.go:228] Setting addon storage-provisioner=true in "addons-748280"
	I0531 18:45:34.801207    8307 host.go:66] Checking if "addons-748280" exists ...
	I0531 18:45:34.801666    8307 cli_runner.go:164] Run: docker container inspect addons-748280 --format={{.State.Status}}
	I0531 18:45:34.843678    8307 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0531 18:45:34.847907    8307 addons.go:420] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0531 18:45:34.847940    8307 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0531 18:45:34.848028    8307 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-748280
	I0531 18:45:34.854229    8307 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.5
	I0531 18:45:34.856396    8307 addons.go:420] installing /etc/kubernetes/addons/deployment.yaml
	I0531 18:45:34.856416    8307 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1003 bytes)
	I0531 18:45:34.856507    8307 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-748280
	I0531 18:45:34.859953    8307 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0531 18:45:34.864384    8307 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0531 18:45:34.866240    8307 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0531 18:45:34.867941    8307 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0531 18:45:34.871142    8307 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0531 18:45:34.872755    8307 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0531 18:45:34.890996    8307 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0531 18:45:34.895486    8307 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0531 18:45:34.897367    8307 addons.go:420] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0531 18:45:34.897391    8307 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0531 18:45:34.897457    8307 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-748280
	I0531 18:45:35.034638    8307 out.go:177]   - Using image docker.io/registry:2.8.1
	I0531 18:45:35.028066    8307 host.go:66] Checking if "addons-748280" exists ...
	I0531 18:45:35.048115    8307 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.6.3
	I0531 18:45:35.049916    8307 addons.go:420] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0531 18:45:35.049964    8307 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0531 18:45:35.050059    8307 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-748280
	I0531 18:45:35.048126    8307 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.7.0
	I0531 18:45:35.048134    8307 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.16.1
	I0531 18:45:35.066152    8307 addons.go:420] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0531 18:45:35.066181    8307 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0531 18:45:35.066270    8307 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-748280
	I0531 18:45:35.062895    8307 out.go:177]   - Using image gcr.io/google_containers/kube-registry-proxy:0.4
	I0531 18:45:35.077212    8307 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.2
	I0531 18:45:35.082693    8307 addons.go:420] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0531 18:45:35.082831    8307 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0531 18:45:35.083049    8307 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-748280
	I0531 18:45:35.080231    8307 addons.go:420] installing /etc/kubernetes/addons/registry-rc.yaml
	I0531 18:45:35.083373    8307 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (798 bytes)
	I0531 18:45:35.083461    8307 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-748280
	I0531 18:45:35.087743    8307 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20230312-helm-chart-4.5.2-28-g66a760794
	I0531 18:45:35.086421    8307 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.27.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0531 18:45:35.096188    8307 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20230312-helm-chart-4.5.2-28-g66a760794
	I0531 18:45:35.098367    8307 addons.go:420] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0531 18:45:35.098425    8307 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16145 bytes)
	I0531 18:45:35.098522    8307 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-748280
	I0531 18:45:35.109057    8307 addons.go:228] Setting addon default-storageclass=true in "addons-748280"
	I0531 18:45:35.109166    8307 host.go:66] Checking if "addons-748280" exists ...
	I0531 18:45:35.109795    8307 cli_runner.go:164] Run: docker container inspect addons-748280 --format={{.State.Status}}
	I0531 18:45:35.134953    8307 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/16569-2389/.minikube/machines/addons-748280/id_rsa Username:docker}
	I0531 18:45:35.154085    8307 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/16569-2389/.minikube/machines/addons-748280/id_rsa Username:docker}
	I0531 18:45:35.173325    8307 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/16569-2389/.minikube/machines/addons-748280/id_rsa Username:docker}
	I0531 18:45:35.202452    8307 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/16569-2389/.minikube/machines/addons-748280/id_rsa Username:docker}
	I0531 18:45:35.228466    8307 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0531 18:45:35.239095    8307 addons.go:420] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0531 18:45:35.239114    8307 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0531 18:45:35.239177    8307 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-748280
	I0531 18:45:35.240831    8307 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/16569-2389/.minikube/machines/addons-748280/id_rsa Username:docker}
	I0531 18:45:35.312438    8307 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/16569-2389/.minikube/machines/addons-748280/id_rsa Username:docker}
	I0531 18:45:35.331207    8307 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/16569-2389/.minikube/machines/addons-748280/id_rsa Username:docker}
	I0531 18:45:35.392830    8307 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/16569-2389/.minikube/machines/addons-748280/id_rsa Username:docker}
	I0531 18:45:35.394504    8307 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/16569-2389/.minikube/machines/addons-748280/id_rsa Username:docker}
	I0531 18:45:35.401783    8307 addons.go:420] installing /etc/kubernetes/addons/storageclass.yaml
	I0531 18:45:35.401804    8307 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0531 18:45:35.401865    8307 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-748280
	I0531 18:45:35.456019    8307 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/16569-2389/.minikube/machines/addons-748280/id_rsa Username:docker}
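Every ssh client above dials 127.0.0.1:32772: the control-plane "machine" is a Docker container, and 32772 is the host port Docker mapped to its 22/tcp. The inspect template in the Run lines extracts exactly that mapping; `docker port` is a shorter equivalent (the host port shown is taken from this log and varies per run):

    docker port addons-748280 22
    # => 0.0.0.0:32772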
	I0531 18:45:35.604341    8307 addons.go:420] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0531 18:45:35.604378    8307 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0531 18:45:35.610835    8307 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0531 18:45:35.672621    8307 addons.go:420] installing /etc/kubernetes/addons/registry-svc.yaml
	I0531 18:45:35.672640    8307 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0531 18:45:35.676037    8307 addons.go:420] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0531 18:45:35.676055    8307 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0531 18:45:35.723105    8307 addons.go:420] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0531 18:45:35.723174    8307 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0531 18:45:35.750031    8307 addons.go:420] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0531 18:45:35.750095    8307 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0531 18:45:35.755152    8307 addons.go:420] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0531 18:45:35.755215    8307 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0531 18:45:35.839029    8307 addons.go:420] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0531 18:45:35.839050    8307 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0531 18:45:35.870498    8307 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0531 18:45:35.889628    8307 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0531 18:45:35.912039    8307 addons.go:420] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0531 18:45:35.912058    8307 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (950 bytes)
	I0531 18:45:35.927616    8307 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0531 18:45:35.931936    8307 addons.go:420] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0531 18:45:35.932004    8307 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0531 18:45:35.961281    8307 addons.go:420] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0531 18:45:35.961339    8307 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0531 18:45:35.982944    8307 addons.go:420] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0531 18:45:35.983001    8307 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0531 18:45:36.035259    8307 addons.go:420] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0531 18:45:36.035318    8307 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0531 18:45:36.079408    8307 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0531 18:45:36.093709    8307 addons.go:420] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0531 18:45:36.093775    8307 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0531 18:45:36.123904    8307 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0531 18:45:36.144705    8307 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0531 18:45:36.148012    8307 addons.go:420] installing /etc/kubernetes/addons/ig-role.yaml
	I0531 18:45:36.148037    8307 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0531 18:45:36.189986    8307 addons.go:420] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0531 18:45:36.190012    8307 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0531 18:45:36.221682    8307 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0531 18:45:36.303622    8307 addons.go:420] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0531 18:45:36.303647    8307 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0531 18:45:36.324004    8307 addons.go:420] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0531 18:45:36.324029    8307 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0531 18:45:36.464543    8307 addons.go:420] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0531 18:45:36.464574    8307 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0531 18:45:36.483044    8307 addons.go:420] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0531 18:45:36.483068    8307 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0531 18:45:36.583047    8307 addons.go:420] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0531 18:45:36.583115    8307 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0531 18:45:36.604836    8307 kapi.go:248] "coredns" deployment in "kube-system" namespace and "addons-748280" context rescaled to 1 replicas
	I0531 18:45:36.604922    8307 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0531 18:45:36.608052    8307 out.go:177] * Verifying Kubernetes components...
	I0531 18:45:36.609755    8307 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
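`systemctl is-active --quiet` prints nothing and reports purely through its exit status (0 only when the unit is active), which makes it a clean scriptable health probe for the kubelet here:

    # Exit 0 iff the unit is active; --quiet suppresses the "active"/"inactive" text
    sudo systemctl is-active --quiet kubelet && echo "kubelet running"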
	I0531 18:45:36.631756    8307 addons.go:420] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0531 18:45:36.631818    8307 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0531 18:45:36.723155    8307 addons.go:420] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0531 18:45:36.723336    8307 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0531 18:45:36.723319    8307 addons.go:420] installing /etc/kubernetes/addons/ig-crd.yaml
	I0531 18:45:36.723419    8307 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0531 18:45:36.764383    8307 addons.go:420] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0531 18:45:36.764402    8307 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7741 bytes)
	I0531 18:45:36.771995    8307 addons.go:420] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0531 18:45:36.772015    8307 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0531 18:45:36.801659    8307 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.2/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0531 18:45:36.834383    8307 addons.go:420] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0531 18:45:36.834407    8307 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0531 18:45:37.026676    8307 addons.go:420] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0531 18:45:37.026701    8307 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0531 18:45:37.181870    8307 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0531 18:45:38.305720    8307 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.27.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (3.211769898s)
	I0531 18:45:38.305749    8307 start.go:916] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
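The pipeline that just completed fetches the coredns ConfigMap, splices a hosts block in front of the forward plugin with sed, and replaces the ConfigMap, so host.minikube.internal resolves to the Docker network gateway from inside the cluster. Reconstructed from the sed expression in the log, the injected Corefile fragment is:

        hosts {
           192.168.49.1 host.minikube.internal
           fallthrough
        }

A quick in-cluster resolution check (a sketch using a throwaway pod):

    kubectl run dnscheck --rm -it --restart=Never --image=busybox -- \
      nslookup host.minikube.internal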
	I0531 18:45:38.948570    8307 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (3.337703875s)
	I0531 18:45:40.694373    8307 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (4.823809012s)
	I0531 18:45:40.694414    8307 addons.go:464] Verifying addon ingress=true in "addons-748280"
	I0531 18:45:40.696538    8307 out.go:177] * Verifying ingress addon...
	I0531 18:45:40.694597    8307 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (4.804949432s)
	I0531 18:45:40.694726    8307 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.767089735s)
	I0531 18:45:40.694774    8307 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (4.615309889s)
	I0531 18:45:40.694810    8307 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (4.570880582s)
	I0531 18:45:40.694882    8307 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (4.550151931s)
	I0531 18:45:40.694997    8307 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (4.085186488s)
	I0531 18:45:40.695051    8307 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.2/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (3.893323211s)
	I0531 18:45:40.695237    8307 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (4.473255497s)
	W0531 18:45:40.698396    8307 addons.go:446] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0531 18:45:40.698451    8307 retry.go:31] will retry after 134.338127ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
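This failure is the usual CRD-ordering race, not a broken manifest: the single `kubectl apply` batch both creates the VolumeSnapshot CRDs and submits a VolumeSnapshotClass, and the API server has not yet established the new kind when the custom resource is validated, hence "no matches for kind ... ensure CRDs are installed first". The 134ms retry (re-issued with --force at 18:45:40.833 below) succeeds once the CRDs register. A race-free sketch splits the apply and waits for the CRD to be established first:

    kubectl apply -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
    kubectl wait --for=condition=established --timeout=60s \
      crd/volumesnapshotclasses.snapshot.storage.k8s.io
    kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml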
	I0531 18:45:40.699398    8307 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0531 18:45:40.699653    8307 addons.go:464] Verifying addon registry=true in "addons-748280"
	I0531 18:45:40.702919    8307 out.go:177] * Verifying registry addon...
	I0531 18:45:40.700042    8307 addons.go:464] Verifying addon metrics-server=true in "addons-748280"
	I0531 18:45:40.700911    8307 node_ready.go:35] waiting up to 6m0s for node "addons-748280" to be "Ready" ...
	I0531 18:45:40.705418    8307 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0531 18:45:40.727177    8307 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0531 18:45:40.727207    8307 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 18:45:40.728985    8307 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I0531 18:45:40.729012    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0531 18:45:40.833001    8307 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0531 18:45:41.110433    8307 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (3.928497816s)
	I0531 18:45:41.110506    8307 addons.go:464] Verifying addon csi-hostpath-driver=true in "addons-748280"
	I0531 18:45:41.112636    8307 out.go:177] * Verifying csi-hostpath-driver addon...
	I0531 18:45:41.115915    8307 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0531 18:45:41.151445    8307 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0531 18:45:41.151512    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:45:41.233867    8307 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 18:45:41.238589    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0531 18:45:41.670115    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:45:41.787279    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0531 18:45:41.788576    8307 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 18:45:42.166273    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:45:42.237217    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0531 18:45:42.241429    8307 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 18:45:42.644305    8307 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.811186922s)
	I0531 18:45:42.659322    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:45:42.734239    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0531 18:45:42.735504    8307 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 18:45:42.742402    8307 node_ready.go:58] node "addons-748280" has status "Ready":"False"
	I0531 18:45:43.158489    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:45:43.172762    8307 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0531 18:45:43.172853    8307 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-748280
	I0531 18:45:43.218852    8307 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/16569-2389/.minikube/machines/addons-748280/id_rsa Username:docker}
	I0531 18:45:43.235140    8307 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 18:45:43.244382    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0531 18:45:43.408630    8307 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0531 18:45:43.487002    8307 addons.go:228] Setting addon gcp-auth=true in "addons-748280"
	I0531 18:45:43.487051    8307 host.go:66] Checking if "addons-748280" exists ...
	I0531 18:45:43.487497    8307 cli_runner.go:164] Run: docker container inspect addons-748280 --format={{.State.Status}}
	I0531 18:45:43.516870    8307 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0531 18:45:43.516919    8307 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-748280
	I0531 18:45:43.555943    8307 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/16569-2389/.minikube/machines/addons-748280/id_rsa Username:docker}
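
The cli_runner/sshutil pairs above show how the host reaches the node container: docker inspect resolves the host port published for the container's 22/tcp, and an SSH client is then opened against 127.0.0.1 on that port. A short sketch of the port lookup, shelling out to docker the same way cli_runner does (error handling trimmed for brevity):

	package main

	import (
		"fmt"
		"log"
		"os/exec"
		"strings"
	)

	func main() {
		// Same Go template the log shows: index into the container's
		// published ports and print the host side of 22/tcp.
		out, err := exec.Command("docker", "container", "inspect",
			"-f", `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`,
			"addons-748280").Output()
		if err != nil {
			log.Fatal(err)
		}
		fmt.Println("ssh endpoint: 127.0.0.1:" + strings.TrimSpace(string(out))) // 32772 in this run
	}
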
	I0531 18:45:43.656802    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:45:43.677857    8307 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20230312-helm-chart-4.5.2-28-g66a760794
	I0531 18:45:43.679856    8307 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.0
	I0531 18:45:43.682015    8307 addons.go:420] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0531 18:45:43.682078    8307 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0531 18:45:43.714093    8307 addons.go:420] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0531 18:45:43.714167    8307 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0531 18:45:43.732623    8307 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 18:45:43.736905    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0531 18:45:43.755717    8307 addons.go:420] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0531 18:45:43.755791    8307 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5474 bytes)
	I0531 18:45:43.787125    8307 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
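
Each addon install in this log follows the same shape: manifests are scp'd onto the node, then applied in a single kubectl invocation with KUBECONFIG pointing at the node's own kubeconfig. A sketch of that apply step as a plain command; in a real run it executes over the SSH session above, not on the host:

	package main

	import (
		"fmt"
		"log"
		"os/exec"
	)

	func main() {
		// sudo accepts leading VAR=value assignments, which is how the
		// kubeconfig is injected in the logged command line.
		out, err := exec.Command("sudo", "KUBECONFIG=/var/lib/minikube/kubeconfig",
			"/var/lib/minikube/binaries/v1.27.2/kubectl", "apply",
			"-f", "/etc/kubernetes/addons/gcp-auth-ns.yaml",
			"-f", "/etc/kubernetes/addons/gcp-auth-service.yaml",
			"-f", "/etc/kubernetes/addons/gcp-auth-webhook.yaml").CombinedOutput()
		if err != nil {
			log.Fatal(err)
		}
		fmt.Print(string(out))
	}
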
	I0531 18:45:44.178117    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:45:44.261992    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0531 18:45:44.263188    8307 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 18:45:44.689605    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:45:44.771540    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0531 18:45:44.772175    8307 node_ready.go:58] node "addons-748280" has status "Ready":"False"
	I0531 18:45:44.773306    8307 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 18:45:45.164683    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:45:45.241659    8307 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 18:45:45.250793    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0531 18:45:45.657018    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:45:45.743220    8307 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 18:45:45.750116    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0531 18:45:46.166120    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:45:46.249888    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0531 18:45:46.253644    8307 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 18:45:46.530332    8307 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (2.742831399s)
	I0531 18:45:46.532628    8307 addons.go:464] Verifying addon gcp-auth=true in "addons-748280"
	I0531 18:45:46.535823    8307 out.go:177] * Verifying gcp-auth addon...
	I0531 18:45:46.538218    8307 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0531 18:45:46.546546    8307 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0531 18:45:46.546563    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 18:45:46.665955    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:45:46.737589    8307 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 18:45:46.738365    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0531 18:45:47.052189    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 18:45:47.157982    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:45:47.238507    8307 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 18:45:47.239405    8307 node_ready.go:58] node "addons-748280" has status "Ready":"False"
	I0531 18:45:47.243137    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0531 18:45:47.557067    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 18:45:47.656349    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:45:47.733201    8307 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 18:45:47.735496    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0531 18:45:48.051873    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 18:45:48.158333    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:45:48.238669    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0531 18:45:48.239249    8307 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 18:45:48.550549    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 18:45:48.656383    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:45:48.734883    8307 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 18:45:48.742473    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0531 18:45:49.051526    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 18:45:49.156607    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:45:49.236533    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0531 18:45:49.237480    8307 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 18:45:49.550747    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 18:45:49.657047    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:45:49.737371    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0531 18:45:49.738356    8307 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 18:45:49.740805    8307 node_ready.go:58] node "addons-748280" has status "Ready":"False"
	I0531 18:45:50.050550    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 18:45:50.156077    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:45:50.231539    8307 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 18:45:50.238016    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0531 18:45:50.550651    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 18:45:50.656401    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:45:50.734903    8307 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 18:45:50.740776    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0531 18:45:51.051968    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 18:45:51.159327    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:45:51.239438    8307 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 18:45:51.242293    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0531 18:45:51.550908    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 18:45:51.657129    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:45:51.733734    8307 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 18:45:51.737237    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0531 18:45:52.050709    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 18:45:52.156409    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:45:52.233955    8307 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 18:45:52.237473    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0531 18:45:52.238599    8307 node_ready.go:58] node "addons-748280" has status "Ready":"False"
	I0531 18:45:52.551937    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 18:45:52.656658    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:45:52.734946    8307 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 18:45:52.745738    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0531 18:45:53.050557    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 18:45:53.156684    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:45:53.233257    8307 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 18:45:53.237581    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0531 18:45:53.550707    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 18:45:53.657307    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:45:53.735013    8307 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 18:45:53.746434    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0531 18:45:54.051203    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 18:45:54.156735    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:45:54.237549    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0531 18:45:54.237817    8307 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 18:45:54.246337    8307 node_ready.go:58] node "addons-748280" has status "Ready":"False"
	I0531 18:45:54.551335    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 18:45:54.657128    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:45:54.731915    8307 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 18:45:54.748528    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0531 18:45:55.051456    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 18:45:55.157887    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:45:55.236000    8307 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 18:45:55.248106    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0531 18:45:55.551340    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 18:45:55.658112    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:45:55.742671    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0531 18:45:55.744342    8307 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 18:45:56.050615    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 18:45:56.156189    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:45:56.232118    8307 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 18:45:56.236146    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0531 18:45:56.550923    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 18:45:56.655937    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:45:56.732289    8307 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 18:45:56.735230    8307 node_ready.go:58] node "addons-748280" has status "Ready":"False"
	I0531 18:45:56.735893    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0531 18:45:57.050667    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 18:45:57.156554    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:45:57.232270    8307 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 18:45:57.235120    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0531 18:45:57.550089    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 18:45:57.657669    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:45:57.732272    8307 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 18:45:57.734467    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0531 18:45:58.051734    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 18:45:58.156638    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:45:58.232972    8307 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 18:45:58.234651    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0531 18:45:58.550462    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 18:45:58.656512    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:45:58.732952    8307 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 18:45:58.735820    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0531 18:45:59.050349    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 18:45:59.156145    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:45:59.232707    8307 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 18:45:59.234299    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0531 18:45:59.236061    8307 node_ready.go:58] node "addons-748280" has status "Ready":"False"
	I0531 18:45:59.550626    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 18:45:59.657657    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:45:59.732015    8307 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 18:45:59.733702    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0531 18:46:00.055339    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 18:46:00.156853    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:46:00.231944    8307 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 18:46:00.236715    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0531 18:46:00.550282    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 18:46:00.657229    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:46:00.731719    8307 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 18:46:00.734199    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0531 18:46:01.051023    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 18:46:01.155911    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:46:01.232521    8307 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 18:46:01.234080    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0531 18:46:01.550503    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 18:46:01.656431    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:46:01.732588    8307 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 18:46:01.734344    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0531 18:46:01.736687    8307 node_ready.go:58] node "addons-748280" has status "Ready":"False"
	I0531 18:46:02.050643    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 18:46:02.156927    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:46:02.234115    8307 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 18:46:02.235400    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0531 18:46:02.551908    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 18:46:02.656693    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:46:02.733524    8307 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 18:46:02.733780    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0531 18:46:03.051104    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 18:46:03.156441    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:46:03.233004    8307 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 18:46:03.239091    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0531 18:46:03.551012    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 18:46:03.656099    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:46:03.731938    8307 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 18:46:03.736924    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0531 18:46:04.052863    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 18:46:04.156598    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:46:04.232795    8307 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 18:46:04.234963    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0531 18:46:04.236928    8307 node_ready.go:58] node "addons-748280" has status "Ready":"False"
	I0531 18:46:04.550550    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 18:46:04.656695    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:46:04.732422    8307 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 18:46:04.735251    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0531 18:46:05.050454    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 18:46:05.156857    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:46:05.232157    8307 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 18:46:05.234471    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0531 18:46:05.550043    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 18:46:05.657867    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:46:05.734471    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0531 18:46:05.735605    8307 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 18:46:06.050641    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 18:46:06.156087    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:46:06.231891    8307 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 18:46:06.235846    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0531 18:46:06.550422    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 18:46:06.656428    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:46:06.733650    8307 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 18:46:06.735627    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0531 18:46:06.736058    8307 node_ready.go:58] node "addons-748280" has status "Ready":"False"
	I0531 18:46:07.050953    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 18:46:07.156483    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:46:07.232270    8307 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 18:46:07.235771    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0531 18:46:07.550593    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 18:46:07.656246    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:46:07.732797    8307 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 18:46:07.734919    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0531 18:46:08.050235    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 18:46:08.158407    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:46:08.232470    8307 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 18:46:08.235200    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0531 18:46:08.551173    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 18:46:08.656205    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:46:08.732880    8307 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 18:46:08.735513    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0531 18:46:08.740321    8307 node_ready.go:58] node "addons-748280" has status "Ready":"False"
	I0531 18:46:09.050296    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 18:46:09.156280    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:46:09.232163    8307 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 18:46:09.236479    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0531 18:46:09.578085    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 18:46:09.682101    8307 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0531 18:46:09.682128    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:46:09.753869    8307 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 18:46:09.758724    8307 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0531 18:46:09.758763    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0531 18:46:09.763907    8307 node_ready.go:49] node "addons-748280" has status "Ready":"True"
	I0531 18:46:09.763931    8307 node_ready.go:38] duration metric: took 29.060832781s waiting for node "addons-748280" to be "Ready" ...
	I0531 18:46:09.763941    8307 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0531 18:46:09.777484    8307 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5d78c9869d-ctb4p" in "kube-system" namespace to be "Ready" ...
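
Once the node reports Ready, pod_ready.go waits on each system-critical pod in turn. kubectl can express the same per-pod wait in one command; a sketch, with the pod name and the 6m budget taken from the log (minikube itself polls the API directly rather than shelling out):

	package main

	import (
		"log"
		"os/exec"
	)

	func main() {
		// Blocks until the pod reports the Ready condition or 6m elapses.
		err := exec.Command("kubectl", "--kubeconfig", "/var/lib/minikube/kubeconfig",
			"wait", "--for=condition=Ready", "--timeout=6m",
			"-n", "kube-system", "pod/coredns-5d78c9869d-ctb4p").Run()
		if err != nil {
			log.Fatal(err)
		}
	}
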
	I0531 18:46:10.079515    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 18:46:10.175397    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:46:10.234073    8307 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 18:46:10.238077    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0531 18:46:10.551097    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 18:46:10.659215    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:46:10.732133    8307 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 18:46:10.735192    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0531 18:46:11.053394    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 18:46:11.164120    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:46:11.243240    8307 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 18:46:11.245192    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0531 18:46:11.551597    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 18:46:11.691585    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:46:11.740046    8307 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 18:46:11.755460    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0531 18:46:11.809900    8307 pod_ready.go:102] pod "coredns-5d78c9869d-ctb4p" in "kube-system" namespace has status "Ready":"False"
	I0531 18:46:12.051065    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 18:46:12.157282    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:46:12.244908    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0531 18:46:12.245166    8307 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 18:46:12.563992    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 18:46:12.660035    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:46:12.736431    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0531 18:46:12.738093    8307 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 18:46:12.803413    8307 pod_ready.go:92] pod "coredns-5d78c9869d-ctb4p" in "kube-system" namespace has status "Ready":"True"
	I0531 18:46:12.803441    8307 pod_ready.go:81] duration metric: took 3.025927803s waiting for pod "coredns-5d78c9869d-ctb4p" in "kube-system" namespace to be "Ready" ...
	I0531 18:46:12.803473    8307 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-748280" in "kube-system" namespace to be "Ready" ...
	I0531 18:46:12.811983    8307 pod_ready.go:92] pod "etcd-addons-748280" in "kube-system" namespace has status "Ready":"True"
	I0531 18:46:12.812016    8307 pod_ready.go:81] duration metric: took 8.52385ms waiting for pod "etcd-addons-748280" in "kube-system" namespace to be "Ready" ...
	I0531 18:46:12.812075    8307 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-748280" in "kube-system" namespace to be "Ready" ...
	I0531 18:46:12.821102    8307 pod_ready.go:92] pod "kube-apiserver-addons-748280" in "kube-system" namespace has status "Ready":"True"
	I0531 18:46:12.821138    8307 pod_ready.go:81] duration metric: took 9.039464ms waiting for pod "kube-apiserver-addons-748280" in "kube-system" namespace to be "Ready" ...
	I0531 18:46:12.821150    8307 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-748280" in "kube-system" namespace to be "Ready" ...
	I0531 18:46:12.828456    8307 pod_ready.go:92] pod "kube-controller-manager-addons-748280" in "kube-system" namespace has status "Ready":"True"
	I0531 18:46:12.828479    8307 pod_ready.go:81] duration metric: took 7.322137ms waiting for pod "kube-controller-manager-addons-748280" in "kube-system" namespace to be "Ready" ...
	I0531 18:46:12.828495    8307 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-k8k6d" in "kube-system" namespace to be "Ready" ...
	I0531 18:46:12.835284    8307 pod_ready.go:92] pod "kube-proxy-k8k6d" in "kube-system" namespace has status "Ready":"True"
	I0531 18:46:12.835307    8307 pod_ready.go:81] duration metric: took 6.805292ms waiting for pod "kube-proxy-k8k6d" in "kube-system" namespace to be "Ready" ...
	I0531 18:46:12.835318    8307 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-748280" in "kube-system" namespace to be "Ready" ...
	I0531 18:46:13.051528    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 18:46:13.160101    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:46:13.200163    8307 pod_ready.go:92] pod "kube-scheduler-addons-748280" in "kube-system" namespace has status "Ready":"True"
	I0531 18:46:13.200190    8307 pod_ready.go:81] duration metric: took 364.864033ms waiting for pod "kube-scheduler-addons-748280" in "kube-system" namespace to be "Ready" ...
	I0531 18:46:13.200202    8307 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-844d8db974-vjh5j" in "kube-system" namespace to be "Ready" ...
	I0531 18:46:13.234822    8307 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 18:46:13.239050    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0531 18:46:13.551019    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 18:46:13.658550    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:46:13.735243    8307 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 18:46:13.739203    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0531 18:46:14.055779    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 18:46:14.159695    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:46:14.241164    8307 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 18:46:14.249309    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0531 18:46:14.553880    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 18:46:14.677060    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:46:14.740816    8307 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 18:46:14.741018    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0531 18:46:15.058343    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 18:46:15.183725    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:46:15.240754    8307 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 18:46:15.243905    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0531 18:46:15.551561    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 18:46:15.611133    8307 pod_ready.go:102] pod "metrics-server-844d8db974-vjh5j" in "kube-system" namespace has status "Ready":"False"
	I0531 18:46:15.659998    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:46:15.737543    8307 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 18:46:15.738880    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0531 18:46:16.063365    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 18:46:16.125522    8307 pod_ready.go:92] pod "metrics-server-844d8db974-vjh5j" in "kube-system" namespace has status "Ready":"True"
	I0531 18:46:16.125553    8307 pod_ready.go:81] duration metric: took 2.925343008s waiting for pod "metrics-server-844d8db974-vjh5j" in "kube-system" namespace to be "Ready" ...
	I0531 18:46:16.125574    8307 pod_ready.go:38] duration metric: took 6.361621787s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0531 18:46:16.125590    8307 api_server.go:52] waiting for apiserver process to appear ...
	I0531 18:46:16.125667    8307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 18:46:16.141086    8307 api_server.go:72] duration metric: took 39.536120249s to wait for apiserver process to appear ...
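
The apiserver process check above is a single pgrep: -f matches against the full command line, -x requires the pattern to match that line exactly, and -n keeps only the newest match. The same check as a sketch, keyed off the exit status:

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// Exit status 0 means at least one matching process exists.
		err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run()
		fmt.Println("apiserver process up:", err == nil)
	}
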
	I0531 18:46:16.141155    8307 api_server.go:88] waiting for apiserver healthz status ...
	I0531 18:46:16.141201    8307 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0531 18:46:16.170116    8307 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0531 18:46:16.173939    8307 api_server.go:141] control plane version: v1.27.2
	I0531 18:46:16.174011    8307 api_server.go:131] duration metric: took 32.810734ms to wait for apiserver health ...
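
The healthz probe above is a plain HTTPS GET that expects status 200 with body "ok". A sketch of the same probe; it skips certificate verification for brevity and assumes the cluster's default system:public-info-viewer binding still permits anonymous access to /healthz, whereas the real check goes through an authenticated client:

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
	)

	func main() {
		client := &http.Client{Transport: &http.Transport{
			// Sketch shortcut: trust the apiserver's self-signed cert blindly.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		}}
		resp, err := client.Get("https://192.168.49.2:8443/healthz")
		if err != nil {
			panic(err)
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		fmt.Println(resp.StatusCode, string(body)) // healthy apiserver: 200 ok
	}
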
	I0531 18:46:16.174033    8307 system_pods.go:43] waiting for kube-system pods to appear ...
	I0531 18:46:16.179486    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:46:16.193847    8307 system_pods.go:59] 17 kube-system pods found
	I0531 18:46:16.193921    8307 system_pods.go:61] "coredns-5d78c9869d-ctb4p" [86b196ff-3fe1-4e1b-baa5-1442e2f87a25] Running
	I0531 18:46:16.193950    8307 system_pods.go:61] "csi-hostpath-attacher-0" [0a38d877-9d43-4001-8be9-5a36ce810f69] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0531 18:46:16.193974    8307 system_pods.go:61] "csi-hostpath-resizer-0" [2fe99aec-2754-4668-8754-d46f38067eb8] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0531 18:46:16.194013    8307 system_pods.go:61] "csi-hostpathplugin-9s7pv" [afdede99-ba58-4f7c-94cc-89e879305e53] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0531 18:46:16.194034    8307 system_pods.go:61] "etcd-addons-748280" [8786107d-987d-4569-8e24-ae449d38c099] Running
	I0531 18:46:16.194069    8307 system_pods.go:61] "kindnet-265l5" [7b84e0aa-879f-4e69-961e-8c4194edd15a] Running
	I0531 18:46:16.194092    8307 system_pods.go:61] "kube-apiserver-addons-748280" [0fca224a-f38d-4262-88b0-2ff337d6f892] Running
	I0531 18:46:16.194113    8307 system_pods.go:61] "kube-controller-manager-addons-748280" [159d31ae-37a4-49b4-8c62-ec30077f09e1] Running
	I0531 18:46:16.194137    8307 system_pods.go:61] "kube-ingress-dns-minikube" [60b6737d-85a4-4be3-a343-c649a32d5573] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0531 18:46:16.194169    8307 system_pods.go:61] "kube-proxy-k8k6d" [756a2b75-7fc4-403d-a71b-951fdaf0092c] Running
	I0531 18:46:16.194193    8307 system_pods.go:61] "kube-scheduler-addons-748280" [8f52249a-2f9c-44a3-8f71-2ba8cf5b3f60] Running
	I0531 18:46:16.194214    8307 system_pods.go:61] "metrics-server-844d8db974-vjh5j" [af201e0f-457a-4fb5-91e6-f01fdfaa6868] Running
	I0531 18:46:16.194238    8307 system_pods.go:61] "registry-6hcmh" [98c2e1ee-6d1b-4140-a410-92c62d5b0c8e] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0531 18:46:16.194272    8307 system_pods.go:61] "registry-proxy-c7bxw" [6510a1f0-5ba2-49e1-8749-6a1b8101c599] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0531 18:46:16.194300    8307 system_pods.go:61] "snapshot-controller-75bbb956b9-b664t" [cff32ef5-f503-4b54-89a5-2fa37f87d544] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0531 18:46:16.194325    8307 system_pods.go:61] "snapshot-controller-75bbb956b9-l8h9p" [4e69779f-d2a9-4e23-949f-626705bea5de] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0531 18:46:16.194348    8307 system_pods.go:61] "storage-provisioner" [6e1cb66c-eee1-4f33-896a-8c80d6c8c213] Running
	I0531 18:46:16.194379    8307 system_pods.go:74] duration metric: took 20.327016ms to wait for pod list to return data ...
	I0531 18:46:16.194407    8307 default_sa.go:34] waiting for default service account to be created ...
	I0531 18:46:16.211502    8307 default_sa.go:45] found service account: "default"
	I0531 18:46:16.211522    8307 default_sa.go:55] duration metric: took 17.098433ms for default service account to be created ...
	I0531 18:46:16.211531    8307 system_pods.go:116] waiting for k8s-apps to be running ...
	I0531 18:46:16.226546    8307 system_pods.go:86] 17 kube-system pods found
	I0531 18:46:16.226638    8307 system_pods.go:89] "coredns-5d78c9869d-ctb4p" [86b196ff-3fe1-4e1b-baa5-1442e2f87a25] Running
	I0531 18:46:16.226663    8307 system_pods.go:89] "csi-hostpath-attacher-0" [0a38d877-9d43-4001-8be9-5a36ce810f69] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0531 18:46:16.226712    8307 system_pods.go:89] "csi-hostpath-resizer-0" [2fe99aec-2754-4668-8754-d46f38067eb8] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0531 18:46:16.226768    8307 system_pods.go:89] "csi-hostpathplugin-9s7pv" [afdede99-ba58-4f7c-94cc-89e879305e53] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0531 18:46:16.226791    8307 system_pods.go:89] "etcd-addons-748280" [8786107d-987d-4569-8e24-ae449d38c099] Running
	I0531 18:46:16.226816    8307 system_pods.go:89] "kindnet-265l5" [7b84e0aa-879f-4e69-961e-8c4194edd15a] Running
	I0531 18:46:16.226852    8307 system_pods.go:89] "kube-apiserver-addons-748280" [0fca224a-f38d-4262-88b0-2ff337d6f892] Running
	I0531 18:46:16.226879    8307 system_pods.go:89] "kube-controller-manager-addons-748280" [159d31ae-37a4-49b4-8c62-ec30077f09e1] Running
	I0531 18:46:16.226904    8307 system_pods.go:89] "kube-ingress-dns-minikube" [60b6737d-85a4-4be3-a343-c649a32d5573] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0531 18:46:16.226939    8307 system_pods.go:89] "kube-proxy-k8k6d" [756a2b75-7fc4-403d-a71b-951fdaf0092c] Running
	I0531 18:46:16.226963    8307 system_pods.go:89] "kube-scheduler-addons-748280" [8f52249a-2f9c-44a3-8f71-2ba8cf5b3f60] Running
	I0531 18:46:16.226987    8307 system_pods.go:89] "metrics-server-844d8db974-vjh5j" [af201e0f-457a-4fb5-91e6-f01fdfaa6868] Running
	I0531 18:46:16.227024    8307 system_pods.go:89] "registry-6hcmh" [98c2e1ee-6d1b-4140-a410-92c62d5b0c8e] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0531 18:46:16.227050    8307 system_pods.go:89] "registry-proxy-c7bxw" [6510a1f0-5ba2-49e1-8749-6a1b8101c599] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0531 18:46:16.227079    8307 system_pods.go:89] "snapshot-controller-75bbb956b9-b664t" [cff32ef5-f503-4b54-89a5-2fa37f87d544] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0531 18:46:16.227122    8307 system_pods.go:89] "snapshot-controller-75bbb956b9-l8h9p" [4e69779f-d2a9-4e23-949f-626705bea5de] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0531 18:46:16.227145    8307 system_pods.go:89] "storage-provisioner" [6e1cb66c-eee1-4f33-896a-8c80d6c8c213] Running
	I0531 18:46:16.227179    8307 system_pods.go:126] duration metric: took 15.642141ms to wait for k8s-apps to be running ...
	I0531 18:46:16.227206    8307 system_svc.go:44] waiting for kubelet service to be running ....
	I0531 18:46:16.227287    8307 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0531 18:46:16.251604    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0531 18:46:16.252052    8307 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 18:46:16.266019    8307 system_svc.go:56] duration metric: took 38.804735ms WaitForService to wait for kubelet.
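
The kubelet check (system_svc.go) relies on systemctl's exit code rather than its output: is-active --quiet prints nothing and exits 0 only when the unit is active. A sketch; the logged command line also carries a literal "service" token, which this sketch drops to query the kubelet unit directly:

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// Exit code 0 == unit active; --quiet suppresses the state string.
		err := exec.Command("sudo", "systemctl", "is-active", "--quiet", "kubelet").Run()
		fmt.Println("kubelet running:", err == nil)
	}
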
	I0531 18:46:16.266117    8307 kubeadm.go:581] duration metric: took 39.661154429s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0531 18:46:16.266151    8307 node_conditions.go:102] verifying NodePressure condition ...
	I0531 18:46:16.399440    8307 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0531 18:46:16.399514    8307 node_conditions.go:123] node cpu capacity is 2
	I0531 18:46:16.399539    8307 node_conditions.go:105] duration metric: took 133.367205ms to run NodePressure ...
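
While verifying NodePressure, minikube also logs the node's CPU and ephemeral-storage capacity (the two node_conditions lines above: 2 CPUs, 203034800Ki). The same figures can be pulled with kubectl's jsonpath output; bracket notation handles the hyphenated capacity key. A sketch:

	package main

	import (
		"fmt"
		"log"
		"os/exec"
	)

	func main() {
		out, err := exec.Command("kubectl", "--kubeconfig", "/var/lib/minikube/kubeconfig",
			"get", "node", "addons-748280", "-o",
			"jsonpath={.status.capacity.cpu} {.status.capacity['ephemeral-storage']}").Output()
		if err != nil {
			log.Fatal(err)
		}
		fmt.Println(string(out)) // e.g. "2 203034800Ki"
	}
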
	I0531 18:46:16.399565    8307 start.go:228] waiting for startup goroutines ...
	I0531 18:46:16.553389    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 18:46:16.658704    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:46:16.734365    8307 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 18:46:16.737378    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0531 18:46:17.051177    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 18:46:17.183110    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:46:17.233267    8307 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 18:46:17.237068    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0531 18:46:17.551315    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 18:46:17.658042    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:46:17.733840    8307 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 18:46:17.735063    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0531 18:46:18.051186    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 18:46:18.158058    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:46:18.232998    8307 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 18:46:18.234769    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0531 18:46:18.550532    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 18:46:18.659567    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:46:18.735526    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0531 18:46:18.737436    8307 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 18:46:19.051179    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 18:46:19.160000    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:46:19.235351    8307 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 18:46:19.243917    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0531 18:46:19.550563    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 18:46:19.668269    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:46:19.733796    8307 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 18:46:19.736989    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0531 18:46:20.051160    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 18:46:20.166318    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:46:20.237507    8307 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 18:46:20.243875    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0531 18:46:20.552967    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 18:46:20.680633    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:46:20.733959    8307 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 18:46:20.751618    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0531 18:46:21.050571    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 18:46:21.160308    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:46:21.235061    8307 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 18:46:21.236409    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0531 18:46:21.551140    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 18:46:21.658240    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:46:21.732551    8307 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 18:46:21.737522    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0531 18:46:22.050762    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 18:46:22.173626    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:46:22.235839    8307 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 18:46:22.240780    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0531 18:46:22.551530    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 18:46:22.657992    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:46:22.737418    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0531 18:46:22.739002    8307 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 18:46:23.054433    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 18:46:23.160253    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:46:23.237371    8307 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 18:46:23.240106    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0531 18:46:23.551016    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 18:46:23.657790    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:46:23.733126    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0531 18:46:23.734233    8307 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 18:46:24.050304    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 18:46:24.158866    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:46:24.234991    8307 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 18:46:24.239373    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0531 18:46:24.550943    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 18:46:24.659213    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:46:24.734040    8307 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 18:46:24.737474    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0531 18:46:25.051564    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 18:46:25.159341    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:46:25.232972    8307 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 18:46:25.236106    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0531 18:46:25.551987    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 18:46:25.663678    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:46:25.734074    8307 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 18:46:25.735497    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0531 18:46:26.050774    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 18:46:26.157484    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:46:26.232957    8307 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 18:46:26.234932    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0531 18:46:26.555240    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 18:46:26.657702    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:46:26.732109    8307 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 18:46:26.734871    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0531 18:46:27.050816    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 18:46:27.161727    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:46:27.233765    8307 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 18:46:27.236094    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0531 18:46:27.571929    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 18:46:27.658538    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:46:27.738056    8307 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 18:46:27.742304    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0531 18:46:28.053089    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 18:46:28.157431    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:46:28.233128    8307 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 18:46:28.235370    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0531 18:46:28.557725    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 18:46:28.659333    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:46:28.736113    8307 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 18:46:28.736974    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0531 18:46:29.050286    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 18:46:29.159577    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:46:29.233167    8307 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 18:46:29.234830    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0531 18:46:29.551546    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 18:46:29.662858    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:46:29.732865    8307 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 18:46:29.741778    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0531 18:46:30.051128    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 18:46:30.158918    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:46:30.234392    8307 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 18:46:30.235004    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0531 18:46:30.563404    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 18:46:30.659273    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:46:30.732577    8307 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 18:46:30.735690    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0531 18:46:31.061799    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 18:46:31.160162    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:46:31.233787    8307 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 18:46:31.239814    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0531 18:46:31.564743    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 18:46:31.659312    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:46:31.734853    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0531 18:46:31.735911    8307 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 18:46:32.051122    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 18:46:32.157418    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:46:32.232792    8307 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 18:46:32.234958    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0531 18:46:32.562647    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 18:46:32.657454    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:46:32.732467    8307 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 18:46:32.735730    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0531 18:46:33.051290    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 18:46:33.159188    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:46:33.233996    8307 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 18:46:33.235700    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0531 18:46:33.551637    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 18:46:33.658205    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:46:33.734094    8307 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 18:46:33.739091    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0531 18:46:34.056308    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 18:46:34.160029    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:46:34.235193    8307 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 18:46:34.237328    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0531 18:46:34.552462    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 18:46:34.661315    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:46:34.736594    8307 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 18:46:34.739509    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0531 18:46:35.058771    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 18:46:35.157932    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:46:35.235006    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0531 18:46:35.237145    8307 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 18:46:35.551212    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 18:46:35.659222    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:46:35.734392    8307 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 18:46:35.735376    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0531 18:46:36.050420    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 18:46:36.157150    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:46:36.232279    8307 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 18:46:36.233969    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0531 18:46:36.552806    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 18:46:36.657397    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:46:36.732451    8307 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 18:46:36.734419    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0531 18:46:37.050235    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 18:46:37.160711    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:46:37.236046    8307 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 18:46:37.236605    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0531 18:46:37.552859    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 18:46:37.661128    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:46:37.733802    8307 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 18:46:37.735992    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0531 18:46:38.052289    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 18:46:38.158373    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:46:38.234299    8307 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 18:46:38.247583    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0531 18:46:38.551776    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 18:46:38.660588    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:46:38.739621    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0531 18:46:38.741467    8307 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 18:46:39.051057    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 18:46:39.157925    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:46:39.234321    8307 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 18:46:39.238121    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0531 18:46:39.551629    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 18:46:39.658440    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:46:39.738175    8307 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 18:46:39.743631    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0531 18:46:40.050920    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 18:46:40.160331    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:46:40.237232    8307 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 18:46:40.243428    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0531 18:46:40.555471    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 18:46:40.663499    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:46:40.763284    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0531 18:46:40.764712    8307 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 18:46:41.050299    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 18:46:41.158779    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:46:41.235302    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0531 18:46:41.235543    8307 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 18:46:41.551836    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 18:46:41.676907    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:46:41.737050    8307 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 18:46:41.740068    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0531 18:46:42.053359    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 18:46:42.158060    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:46:42.234829    8307 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 18:46:42.235610    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0531 18:46:42.550639    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 18:46:42.665889    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:46:42.735436    8307 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 18:46:42.737664    8307 kapi.go:107] duration metric: took 1m2.032236678s to wait for kubernetes.io/minikube-addons=registry ...
	I0531 18:46:43.051537    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 18:46:43.157421    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:46:43.232914    8307 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 18:46:43.551207    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 18:46:43.665675    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:46:43.733001    8307 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 18:46:44.051109    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 18:46:44.157930    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:46:44.232722    8307 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 18:46:44.550673    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 18:46:44.661695    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:46:44.732264    8307 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 18:46:45.051711    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 18:46:45.229756    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:46:45.238779    8307 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 18:46:45.556867    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 18:46:45.661534    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:46:45.732365    8307 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 18:46:46.050564    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 18:46:46.160027    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:46:46.232555    8307 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 18:46:46.555709    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 18:46:46.657509    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:46:46.732664    8307 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 18:46:47.050863    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 18:46:47.158041    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:46:47.232611    8307 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 18:46:47.551129    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 18:46:47.658262    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:46:47.732579    8307 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 18:46:48.051233    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 18:46:48.159520    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:46:48.233564    8307 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 18:46:48.551505    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 18:46:48.658796    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:46:48.736695    8307 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 18:46:49.051314    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 18:46:49.158701    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:46:49.232895    8307 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 18:46:49.552543    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 18:46:49.669533    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:46:49.732530    8307 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 18:46:50.055478    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 18:46:50.158677    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:46:50.232192    8307 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 18:46:50.551419    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 18:46:50.657979    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:46:50.736617    8307 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 18:46:51.052102    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 18:46:51.158316    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:46:51.234123    8307 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 18:46:51.552673    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 18:46:51.659619    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:46:51.732450    8307 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 18:46:52.055569    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 18:46:52.164012    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:46:52.232952    8307 kapi.go:107] duration metric: took 1m11.533548509s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0531 18:46:52.551275    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 18:46:52.659675    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:46:53.050135    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 18:46:53.159574    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:46:53.550256    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 18:46:53.659122    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:46:54.051221    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 18:46:54.157677    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:46:54.552445    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 18:46:54.660638    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:46:55.060952    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 18:46:55.158617    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:46:55.551250    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 18:46:55.658225    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:46:56.051879    8307 kapi.go:107] duration metric: took 1m9.513662148s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0531 18:46:56.053830    8307 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-748280 cluster.
	I0531 18:46:56.056114    8307 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0531 18:46:56.058020    8307 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0531 18:46:56.157494    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:46:56.660134    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:46:57.157309    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:46:57.656985    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:46:58.157335    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:46:58.658016    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:46:59.162343    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:46:59.657376    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:47:00.157557    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:47:00.661472    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:47:01.158087    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:47:01.657941    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:47:02.158800    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:47:02.665512    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:47:03.158871    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:47:03.658442    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:47:04.157700    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:47:04.657756    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:47:05.163217    8307 kapi.go:107] duration metric: took 1m24.047306906s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0531 18:47:05.165252    8307 out.go:177] * Enabled addons: cloud-spanner, ingress-dns, storage-provisioner, default-storageclass, inspektor-gadget, metrics-server, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I0531 18:47:05.167410    8307 addons.go:499] enable addons completed in 1m30.445258386s: enabled=[cloud-spanner ingress-dns storage-provisioner default-storageclass inspektor-gadget metrics-server volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
	I0531 18:47:05.167465    8307 start.go:233] waiting for cluster config update ...
	I0531 18:47:05.167488    8307 start.go:242] writing updated cluster config ...
	I0531 18:47:05.167807    8307 ssh_runner.go:195] Run: rm -f paused
	I0531 18:47:05.230133    8307 start.go:573] kubectl: 1.27.2, cluster: 1.27.2 (minor skew: 0)
	I0531 18:47:05.232563    8307 out.go:177] * Done! kubectl is now configured to use "addons-748280" cluster and "default" namespace by default
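
	The gcp-auth advisory above says a pod can opt out of credential mounting by carrying a label with the `gcp-auth-skip-secret` key. A minimal sketch of that opt-out, assuming the addons-748280 cluster is still up; only the label key comes from the log, while the pod name, image, and label value are illustrative:

	    # Run a throwaway pod that the gcp-auth webhook should leave unmounted
	    # (label key from the log above; name/image/value are placeholders):
	    kubectl --context addons-748280 run no-gcp-creds \
	      --image=gcr.io/k8s-minikube/busybox \
	      --labels="gcp-auth-skip-secret=true" \
	      --restart=Never -- sleep 300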
	
	* 
	* ==> CRI-O <==
	* May 31 18:50:05 addons-748280 crio[891]: time="2023-05-31 18:50:05.680056644Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:13753a81eccfdd153bf7fc9a4c9198edbcce0110e7f46ed0d38cc654a6458ff5,RepoTags:[gcr.io/google-samples/hello-app:1.0],RepoDigests:[gcr.io/google-samples/hello-app@sha256:845f77fab71033404f4cfceaa1ddb27b70c3551ceb22a5e7f4498cdda6c9daea],Size_:28496999,Uid:nil,Username:nonroot,Spec:nil,},Info:map[string]string{},}" id=2c531119-7ac5-44eb-a77d-79376dae21f0 name=/runtime.v1.ImageService/ImageStatus
	May 31 18:50:05 addons-748280 crio[891]: time="2023-05-31 18:50:05.681885273Z" level=info msg="Checking image status: gcr.io/google-samples/hello-app:1.0" id=6079ad74-666e-49e9-ae42-1aba6ba31e24 name=/runtime.v1.ImageService/ImageStatus
	May 31 18:50:05 addons-748280 crio[891]: time="2023-05-31 18:50:05.682544115Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:13753a81eccfdd153bf7fc9a4c9198edbcce0110e7f46ed0d38cc654a6458ff5,RepoTags:[gcr.io/google-samples/hello-app:1.0],RepoDigests:[gcr.io/google-samples/hello-app@sha256:845f77fab71033404f4cfceaa1ddb27b70c3551ceb22a5e7f4498cdda6c9daea],Size_:28496999,Uid:nil,Username:nonroot,Spec:nil,},Info:map[string]string{},}" id=6079ad74-666e-49e9-ae42-1aba6ba31e24 name=/runtime.v1.ImageService/ImageStatus
	May 31 18:50:05 addons-748280 crio[891]: time="2023-05-31 18:50:05.683482487Z" level=info msg="Creating container: default/hello-world-app-65bdb79f98-vk6p5/hello-world-app" id=7f2ecd7e-b21f-479f-8b19-827ce81a039c name=/runtime.v1.RuntimeService/CreateContainer
	May 31 18:50:05 addons-748280 crio[891]: time="2023-05-31 18:50:05.683570634Z" level=warning msg="Allowed annotations are specified for workload []"
	May 31 18:50:05 addons-748280 crio[891]: time="2023-05-31 18:50:05.843601334Z" level=info msg="Created container cd102ec51f95dcd359dba6b98340baf6d272343af490556e7b5c02bffd66fcf1: default/hello-world-app-65bdb79f98-vk6p5/hello-world-app" id=7f2ecd7e-b21f-479f-8b19-827ce81a039c name=/runtime.v1.RuntimeService/CreateContainer
	May 31 18:50:05 addons-748280 crio[891]: time="2023-05-31 18:50:05.844449836Z" level=info msg="Starting container: cd102ec51f95dcd359dba6b98340baf6d272343af490556e7b5c02bffd66fcf1" id=41233d51-2baa-4125-986b-783a7d7d133f name=/runtime.v1.RuntimeService/StartContainer
	May 31 18:50:05 addons-748280 conmon[6832]: conmon cd102ec51f95dcd359db <ninfo>: container 6844 exited with status 1
	May 31 18:50:05 addons-748280 crio[891]: time="2023-05-31 18:50:05.866167325Z" level=info msg="Started container" PID=6844 containerID=cd102ec51f95dcd359dba6b98340baf6d272343af490556e7b5c02bffd66fcf1 description=default/hello-world-app-65bdb79f98-vk6p5/hello-world-app id=41233d51-2baa-4125-986b-783a7d7d133f name=/runtime.v1.RuntimeService/StartContainer sandboxID=bbfaee9e4cdd6a2acb211a89a5a161c75ee1deeb4791e83b97726a5243baa9f4
	May 31 18:50:06 addons-748280 crio[891]: time="2023-05-31 18:50:06.189283241Z" level=warning msg="Stopping container 1c00973904c5ede87fce8ac120682e95b3eaa38a80b14f6ab0d9a0e22f4cf075 with stop signal timed out: timeout reached after 1 seconds waiting for container process to exit" id=3b671984-d418-4be8-a9e1-f8fb3d5f4194 name=/runtime.v1.RuntimeService/StopContainer
	May 31 18:50:06 addons-748280 conmon[4450]: conmon 1c00973904c5ede87fce <ninfo>: container 4461 exited with status 137
	May 31 18:50:06 addons-748280 crio[891]: time="2023-05-31 18:50:06.344977648Z" level=info msg="Stopped container 1c00973904c5ede87fce8ac120682e95b3eaa38a80b14f6ab0d9a0e22f4cf075: ingress-nginx/ingress-nginx-controller-858bcd4f57-76948/controller" id=3b671984-d418-4be8-a9e1-f8fb3d5f4194 name=/runtime.v1.RuntimeService/StopContainer
	May 31 18:50:06 addons-748280 crio[891]: time="2023-05-31 18:50:06.345520905Z" level=info msg="Stopping pod sandbox: afa87f5028abec5fe6fa9b464bc7417d8e5b496dd39a94244c12fbed4562e6e7" id=027c242e-6df3-4244-9f4e-c9d7432b0617 name=/runtime.v1.RuntimeService/StopPodSandbox
	May 31 18:50:06 addons-748280 crio[891]: time="2023-05-31 18:50:06.349100471Z" level=info msg="Restoring iptables rules: *nat\n:KUBE-HP-WCQPKGOSSSVBHIFH - [0:0]\n:KUBE-HP-WZVDBHUK6DFET5EK - [0:0]\n:KUBE-HOSTPORTS - [0:0]\n-X KUBE-HP-WZVDBHUK6DFET5EK\n-X KUBE-HP-WCQPKGOSSSVBHIFH\nCOMMIT\n"
	May 31 18:50:06 addons-748280 crio[891]: time="2023-05-31 18:50:06.350716704Z" level=info msg="Closing host port tcp:80"
	May 31 18:50:06 addons-748280 crio[891]: time="2023-05-31 18:50:06.350796268Z" level=info msg="Closing host port tcp:443"
	May 31 18:50:06 addons-748280 crio[891]: time="2023-05-31 18:50:06.352490565Z" level=info msg="Host port tcp:80 does not have an open socket"
	May 31 18:50:06 addons-748280 crio[891]: time="2023-05-31 18:50:06.352526962Z" level=info msg="Host port tcp:443 does not have an open socket"
	May 31 18:50:06 addons-748280 crio[891]: time="2023-05-31 18:50:06.352717123Z" level=info msg="Got pod network &{Name:ingress-nginx-controller-858bcd4f57-76948 Namespace:ingress-nginx ID:afa87f5028abec5fe6fa9b464bc7417d8e5b496dd39a94244c12fbed4562e6e7 UID:493fae28-0951-494b-9648-19fa4d464abb NetNS:/var/run/netns/80711605-d425-4d7c-b717-9dad3827eb86 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	May 31 18:50:06 addons-748280 crio[891]: time="2023-05-31 18:50:06.352865232Z" level=info msg="Deleting pod ingress-nginx_ingress-nginx-controller-858bcd4f57-76948 from CNI network \"kindnet\" (type=ptp)"
	May 31 18:50:06 addons-748280 crio[891]: time="2023-05-31 18:50:06.371247584Z" level=info msg="Stopped pod sandbox: afa87f5028abec5fe6fa9b464bc7417d8e5b496dd39a94244c12fbed4562e6e7" id=027c242e-6df3-4244-9f4e-c9d7432b0617 name=/runtime.v1.RuntimeService/StopPodSandbox
	May 31 18:50:06 addons-748280 crio[891]: time="2023-05-31 18:50:06.389202928Z" level=info msg="Removing container: 4a2cbe80304edd4ad97fc3f4b969c445e84300b47b8690a3c8e492a701ee5ee8" id=936a6953-583e-4a20-8a20-36ee48ac81dd name=/runtime.v1.RuntimeService/RemoveContainer
	May 31 18:50:06 addons-748280 crio[891]: time="2023-05-31 18:50:06.427715623Z" level=info msg="Removed container 4a2cbe80304edd4ad97fc3f4b969c445e84300b47b8690a3c8e492a701ee5ee8: default/hello-world-app-65bdb79f98-vk6p5/hello-world-app" id=936a6953-583e-4a20-8a20-36ee48ac81dd name=/runtime.v1.RuntimeService/RemoveContainer
	May 31 18:50:06 addons-748280 crio[891]: time="2023-05-31 18:50:06.429152838Z" level=info msg="Removing container: 1c00973904c5ede87fce8ac120682e95b3eaa38a80b14f6ab0d9a0e22f4cf075" id=51b08f96-70af-4b57-ab33-68f21a485cc6 name=/runtime.v1.RuntimeService/RemoveContainer
	May 31 18:50:06 addons-748280 crio[891]: time="2023-05-31 18:50:06.456611491Z" level=info msg="Removed container 1c00973904c5ede87fce8ac120682e95b3eaa38a80b14f6ab0d9a0e22f4cf075: ingress-nginx/ingress-nginx-controller-858bcd4f57-76948/controller" id=51b08f96-70af-4b57-ab33-68f21a485cc6 name=/runtime.v1.RuntimeService/RemoveContainer
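
	The CRI-O log above records the hello-world-app container (cd102ec51f95d...) exiting with status 1 immediately after "Started container", and the ingress-nginx controller being force-stopped (exit 137) once its 1-second stop timeout lapses. A hedged sketch for digging into an exited container from the node itself, using crictl over minikube ssh; the container ID prefix is taken from the log, and the crictl subcommands are standard, though their output details vary by version:

	    # List the exited container, then pull its logs and status:
	    minikube -p addons-748280 ssh -- sudo crictl ps -a --name hello-world-app
	    minikube -p addons-748280 ssh -- sudo crictl logs cd102ec51f95d
	    minikube -p addons-748280 ssh -- sudo crictl inspect cd102ec51f95d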
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                                                        CREATED             STATE               NAME                                     ATTEMPT             POD ID              POD
	cd102ec51f95d       13753a81eccfdd153bf7fc9a4c9198edbcce0110e7f46ed0d38cc654a6458ff5                                                                             7 seconds ago       Exited              hello-world-app                          2                   bbfaee9e4cdd6       hello-world-app-65bdb79f98-vk6p5
	e002ec67e7538       docker.io/library/nginx@sha256:203cba3f56d7dba1d66b95c091db65a4f0778eb5d16e76151e73e0413e317328                                              2 minutes ago       Running             nginx                                    0                   93b4f1bcdc79b       nginx
	2bee07dab41d4       registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f                          3 minutes ago       Running             csi-snapshotter                          0                   bcb3d8b8be486       csi-hostpathplugin-9s7pv
	2548c3391ee86       registry.k8s.io/sig-storage/csi-provisioner@sha256:98ffd09c0784203d200e0f8c241501de31c8df79644caac7eed61bd6391e5d49                          3 minutes ago       Running             csi-provisioner                          0                   bcb3d8b8be486       csi-hostpathplugin-9s7pv
	db2d7303d3a79       registry.k8s.io/sig-storage/livenessprobe@sha256:8b00c6e8f52639ed9c6f866085893ab688e57879741b3089e3cfa9998502e158                            3 minutes ago       Running             liveness-probe                           0                   bcb3d8b8be486       csi-hostpathplugin-9s7pv
	6d550170d08c3       registry.k8s.io/sig-storage/hostpathplugin@sha256:7b1dfc90a367222067fc468442fdf952e20fc5961f25c1ad654300ddc34d7083                           3 minutes ago       Running             hostpath                                 0                   bcb3d8b8be486       csi-hostpathplugin-9s7pv
	e045b0df91edc       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:63b520448091bc94aa4dba00d6b3b3c25e410c4fb73aa46feae5b25f9895abaa                                 3 minutes ago       Running             gcp-auth                                 0                   ff0fce1de8b98       gcp-auth-58478865f7-tw9f8
	8ef06e620e95e       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:511b8c8ac828194a753909d26555ff08bc12f497dd8daeb83fe9d593693a26c1                3 minutes ago       Running             node-driver-registrar                    0                   bcb3d8b8be486       csi-hostpathplugin-9s7pv
	c98b1fa476418       registry.k8s.io/sig-storage/csi-resizer@sha256:425d8f1b769398127767b06ed97ce62578a3179bcb99809ce93a1649e025ffe7                              3 minutes ago       Running             csi-resizer                              0                   c9d43ab51cc2d       csi-hostpath-resizer-0
	1cea1a842f042       registry.k8s.io/sig-storage/csi-attacher@sha256:4b5609c78455de45821910065281a368d5f760b41250f90cbde5110543bdc326                             3 minutes ago       Running             csi-attacher                             0                   09c0bc5b8fabc       csi-hostpath-attacher-0
	406a88cc1870b       registry.k8s.io/sig-storage/snapshot-controller@sha256:5d668e35c15df6e87e2530da25d557f543182cedbdb39d421b87076463ee9857                      3 minutes ago       Running             volume-snapshot-controller               0                   768e636d764b2       snapshot-controller-75bbb956b9-l8h9p
	a60430cca9cf2       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:01d181618f270f2a96c04006f33b2699ad3ccb02da48d0f89b22abce084b292f                   3 minutes ago       Exited              patch                                    0                   2e99185a4db5d       ingress-nginx-admission-patch-l59sb
	46250d9e6cdcb       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:80b9ba94aa2afe24553d69bd165a6a51552d1582d68618ec00d3b804a7d9193c   3 minutes ago       Running             csi-external-health-monitor-controller   0                   bcb3d8b8be486       csi-hostpathplugin-9s7pv
	683dddb99a5ab       registry.k8s.io/sig-storage/snapshot-controller@sha256:5d668e35c15df6e87e2530da25d557f543182cedbdb39d421b87076463ee9857                      3 minutes ago       Running             volume-snapshot-controller               0                   fbdd40d7cba51       snapshot-controller-75bbb956b9-b664t
	657cbe8891ab8       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:01d181618f270f2a96c04006f33b2699ad3ccb02da48d0f89b22abce084b292f                   3 minutes ago       Exited              create                                   0                   6913dae0dfe5b       ingress-nginx-admission-create-c4nzk
	7497298e3c635       97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108                                                                             4 minutes ago       Running             coredns                                  0                   464dd0ad42868       coredns-5d78c9869d-ctb4p
	2b333ed8ee445       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                                                             4 minutes ago       Running             storage-provisioner                      0                   3864ceca696e1       storage-provisioner
	7101bc75e91f2       b18bf71b941bae2e12db1c07e567ad14e4febbc778310a0fc64487f1ac877d79                                                                             4 minutes ago       Running             kindnet-cni                              0                   ae4f61d0fc6b4       kindnet-265l5
	6a504f33e4da1       29921a084542255eb81a1a660a603b1a24636d88b202f9010daae75fa32754c0                                                                             4 minutes ago       Running             kube-proxy                               0                   8ebd4a15353d6       kube-proxy-k8k6d
	ab2df087bab58       24bc64e911039ecf00e263be2161797c758b7d82403ca5516ab64047a477f737                                                                             4 minutes ago       Running             etcd                                     0                   c6ce6c07026b8       etcd-addons-748280
	622e30ad34794       305d7ed1dae2877c3a80d434c5fb9f1aac1aa3d2431c36130a3fcd1970e93840                                                                             4 minutes ago       Running             kube-scheduler                           0                   9ff0f008338df       kube-scheduler-addons-748280
	b2472317bc595       72c9df6be7f1b997e4a31b5cb9aa7262e5278905af97e6a69e341e3f0f9bbaae                                                                             4 minutes ago       Running             kube-apiserver                           0                   e37e812dfa950       kube-apiserver-addons-748280
	12917d226c926       2ee705380c3c59a538b853cbe9ae9886ebbd0001a4cea4add5adeea48e5f48d4                                                                             4 minutes ago       Running             kube-controller-manager                  0                   521b34c7ecfa6       kube-controller-manager-addons-748280
	
	* 
	* ==> coredns [7497298e3c6352e8d10edb8ba8b599bab7afb40455b793ac2276c6c6363b4e7a] <==
	* [INFO] 10.244.0.16:37729 - 50460 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000104401s
	[INFO] 10.244.0.16:37729 - 45711 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.002780909s
	[INFO] 10.244.0.16:39281 - 26161 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.003355305s
	[INFO] 10.244.0.16:39281 - 1392 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.00310898s
	[INFO] 10.244.0.16:37729 - 37566 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.003249845s
	[INFO] 10.244.0.16:39281 - 11750 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000126793s
	[INFO] 10.244.0.16:37729 - 20904 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000248228s
	[INFO] 10.244.0.16:56070 - 563 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000113344s
	[INFO] 10.244.0.16:56070 - 1390 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000073796s
	[INFO] 10.244.0.16:56070 - 14017 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000057903s
	[INFO] 10.244.0.16:56070 - 41081 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000043774s
	[INFO] 10.244.0.16:56070 - 30076 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000060898s
	[INFO] 10.244.0.16:56070 - 52138 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000041936s
	[INFO] 10.244.0.16:56070 - 53464 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001820991s
	[INFO] 10.244.0.16:56070 - 3888 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001032787s
	[INFO] 10.244.0.16:56070 - 7291 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.00006939s
	[INFO] 10.244.0.16:35020 - 40580 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.00016388s
	[INFO] 10.244.0.16:35020 - 46634 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000104967s
	[INFO] 10.244.0.16:35020 - 16568 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000071901s
	[INFO] 10.244.0.16:35020 - 63335 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000050125s
	[INFO] 10.244.0.16:35020 - 47367 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000037957s
	[INFO] 10.244.0.16:35020 - 6719 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.00003991s
	[INFO] 10.244.0.16:35020 - 55846 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001063811s
	[INFO] 10.244.0.16:35020 - 30773 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.000924784s
	[INFO] 10.244.0.16:35020 - 1656 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000054998s
	
	* 
	* ==> describe nodes <==
	* Name:               addons-748280
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-748280
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=7022875d4a054c2d518e5e5a7b9d500799d50140
	                    minikube.k8s.io/name=addons-748280
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_05_31T18_45_22_0700
	                    minikube.k8s.io/version=v1.30.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-748280
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-748280"}
	                    kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 31 May 2023 18:45:18 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-748280
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 31 May 2023 18:50:08 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 31 May 2023 18:49:57 +0000   Wed, 31 May 2023 18:45:14 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 31 May 2023 18:49:57 +0000   Wed, 31 May 2023 18:45:14 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 31 May 2023 18:49:57 +0000   Wed, 31 May 2023 18:45:14 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 31 May 2023 18:49:57 +0000   Wed, 31 May 2023 18:46:09 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-748280
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022624Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022624Ki
	  pods:               110
	System Info:
	  Machine ID:                 f0fd9b7bb62247f4b43374d029771549
	  System UUID:                01389d32-e699-4c5e-890c-7ff02ae10f68
	  Boot ID:                    35429d0f-2ece-432d-a992-d9f8cda99d9c
	  Kernel Version:             5.15.0-1037-aws
	  OS Image:                   Ubuntu 22.04.2 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.24.5
	  Kubelet Version:            v1.27.2
	  Kube-Proxy Version:         v1.27.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (17 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     hello-world-app-65bdb79f98-vk6p5         0 (0%)        0 (0%)      0 (0%)           0 (0%)         26s
	  default                     nginx                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m45s
	  default                     task-pv-pod                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         0s
	  gcp-auth                    gcp-auth-58478865f7-tw9f8                0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m27s
	  kube-system                 coredns-5d78c9869d-ctb4p                 100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     4m38s
	  kube-system                 csi-hostpath-attacher-0                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m33s
	  kube-system                 csi-hostpath-resizer-0                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m32s
	  kube-system                 csi-hostpathplugin-9s7pv                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m4s
	  kube-system                 etcd-addons-748280                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         4m52s
	  kube-system                 kindnet-265l5                            100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      4m39s
	  kube-system                 kube-apiserver-addons-748280             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m51s
	  kube-system                 kube-controller-manager-addons-748280    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m53s
	  kube-system                 kube-proxy-k8k6d                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m39s
	  kube-system                 kube-scheduler-addons-748280             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m53s
	  kube-system                 snapshot-controller-75bbb956b9-b664t     0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m33s
	  kube-system                 snapshot-controller-75bbb956b9-l8h9p     0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m33s
	  kube-system                 storage-provisioner                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m33s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age              From             Message
	  ----    ------                   ----             ----             -------
	  Normal  Starting                 4m33s            kube-proxy       
	  Normal  NodeHasSufficientMemory  5m (x8 over 5m)  kubelet          Node addons-748280 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m (x8 over 5m)  kubelet          Node addons-748280 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m (x8 over 5m)  kubelet          Node addons-748280 status is now: NodeHasSufficientPID
	  Normal  Starting                 4m52s            kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m52s            kubelet          Node addons-748280 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m52s            kubelet          Node addons-748280 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m52s            kubelet          Node addons-748280 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           4m40s            node-controller  Node addons-748280 event: Registered Node addons-748280 in Controller
	  Normal  NodeReady                4m4s             kubelet          Node addons-748280 status is now: NodeReady
	
	* 
	* ==> dmesg <==
	* [May31 18:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.014643] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +1.239601] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.408914] kauditd_printk_skb: 26 callbacks suppressed
	
	* 
	* ==> etcd [ab2df087bab58700b57612f6f58877865e5caf7ca986cb74a316f1322c67b0c1] <==
	* {"level":"info","ts":"2023-05-31T18:45:14.359Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-05-31T18:45:14.360Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2023-05-31T18:45:14.360Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-05-31T18:45:14.360Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2023-05-31T18:45:14.370Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-05-31T18:45:14.390Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-05-31T18:45:14.390Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-05-31T18:45:35.501Z","caller":"traceutil/trace.go:171","msg":"trace[2077223569] transaction","detail":"{read_only:false; response_revision:375; number_of_response:1; }","duration":"289.42364ms","start":"2023-05-31T18:45:35.212Z","end":"2023-05-31T18:45:35.501Z","steps":["trace[2077223569] 'process raft request'  (duration: 279.185869ms)"],"step_count":1}
	{"level":"info","ts":"2023-05-31T18:45:35.504Z","caller":"traceutil/trace.go:171","msg":"trace[1356551414] transaction","detail":"{read_only:false; response_revision:376; number_of_response:1; }","duration":"131.286992ms","start":"2023-05-31T18:45:35.373Z","end":"2023-05-31T18:45:35.504Z","steps":["trace[1356551414] 'process raft request'  (duration: 130.988172ms)"],"step_count":1}
	{"level":"info","ts":"2023-05-31T18:45:35.505Z","caller":"traceutil/trace.go:171","msg":"trace[1586516997] transaction","detail":"{read_only:false; response_revision:377; number_of_response:1; }","duration":"117.759206ms","start":"2023-05-31T18:45:35.387Z","end":"2023-05-31T18:45:35.505Z","steps":["trace[1586516997] 'process raft request'  (duration: 117.354984ms)"],"step_count":1}
	{"level":"info","ts":"2023-05-31T18:45:35.505Z","caller":"traceutil/trace.go:171","msg":"trace[314550480] transaction","detail":"{read_only:false; response_revision:378; number_of_response:1; }","duration":"115.594424ms","start":"2023-05-31T18:45:35.389Z","end":"2023-05-31T18:45:35.505Z","steps":["trace[314550480] 'process raft request'  (duration: 115.019471ms)"],"step_count":1}
	{"level":"info","ts":"2023-05-31T18:45:35.507Z","caller":"traceutil/trace.go:171","msg":"trace[1362284431] transaction","detail":"{read_only:false; response_revision:379; number_of_response:1; }","duration":"117.257844ms","start":"2023-05-31T18:45:35.389Z","end":"2023-05-31T18:45:35.507Z","steps":["trace[1362284431] 'process raft request'  (duration: 114.86456ms)"],"step_count":1}
	{"level":"warn","ts":"2023-05-31T18:45:38.038Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"100.828422ms","expected-duration":"100ms","prefix":"","request":"header:<ID:8128021455551041720 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/replicasets/default/cloud-spanner-emulator-6964794569\" mod_revision:0 > success:<request_put:<key:\"/registry/replicasets/default/cloud-spanner-emulator-6964794569\" value_size:1985 >> failure:<>>","response":"size:16"}
	{"level":"info","ts":"2023-05-31T18:45:38.042Z","caller":"traceutil/trace.go:171","msg":"trace[443635447] transaction","detail":"{read_only:false; response_revision:406; number_of_response:1; }","duration":"158.535949ms","start":"2023-05-31T18:45:37.884Z","end":"2023-05-31T18:45:38.042Z","steps":["trace[443635447] 'process raft request'  (duration: 52.86102ms)","trace[443635447] 'compare'  (duration: 95.989235ms)"],"step_count":2}
	{"level":"info","ts":"2023-05-31T18:45:38.043Z","caller":"traceutil/trace.go:171","msg":"trace[885329116] linearizableReadLoop","detail":"{readStateIndex:420; appliedIndex:419; }","duration":"158.52886ms","start":"2023-05-31T18:45:37.884Z","end":"2023-05-31T18:45:38.043Z","steps":["trace[885329116] 'read index received'  (duration: 52.42388ms)","trace[885329116] 'applied index is now lower than readState.Index'  (duration: 106.104143ms)"],"step_count":2}
	{"level":"warn","ts":"2023-05-31T18:45:38.050Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"165.621345ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/apiextensions.k8s.io/customresourcedefinitions/\" range_end:\"/registry/apiextensions.k8s.io/customresourcedefinitions0\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2023-05-31T18:45:38.051Z","caller":"traceutil/trace.go:171","msg":"trace[333068569] range","detail":"{range_begin:/registry/apiextensions.k8s.io/customresourcedefinitions/; range_end:/registry/apiextensions.k8s.io/customresourcedefinitions0; response_count:0; response_revision:406; }","duration":"167.12141ms","start":"2023-05-31T18:45:37.884Z","end":"2023-05-31T18:45:38.051Z","steps":["trace[333068569] 'agreement among raft nodes before linearized reading'  (duration: 165.156627ms)"],"step_count":1}
	{"level":"info","ts":"2023-05-31T18:45:38.053Z","caller":"traceutil/trace.go:171","msg":"trace[2099997861] transaction","detail":"{read_only:false; response_revision:407; number_of_response:1; }","duration":"115.544522ms","start":"2023-05-31T18:45:37.937Z","end":"2023-05-31T18:45:38.053Z","steps":["trace[2099997861] 'process raft request'  (duration: 105.338562ms)"],"step_count":1}
	{"level":"info","ts":"2023-05-31T18:45:38.066Z","caller":"traceutil/trace.go:171","msg":"trace[628386089] transaction","detail":"{read_only:false; response_revision:408; number_of_response:1; }","duration":"128.206321ms","start":"2023-05-31T18:45:37.938Z","end":"2023-05-31T18:45:38.066Z","steps":["trace[628386089] 'process raft request'  (duration: 114.623791ms)"],"step_count":1}
	{"level":"warn","ts":"2023-05-31T18:45:38.216Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"106.955755ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/ranges/serviceips\" ","response":"range_response_count:1 size:116"}
	{"level":"info","ts":"2023-05-31T18:45:38.216Z","caller":"traceutil/trace.go:171","msg":"trace[542074782] range","detail":"{range_begin:/registry/ranges/serviceips; range_end:; response_count:1; response_revision:411; }","duration":"107.035163ms","start":"2023-05-31T18:45:38.109Z","end":"2023-05-31T18:45:38.216Z","steps":["trace[542074782] 'agreement among raft nodes before linearized reading'  (duration: 106.865926ms)"],"step_count":1}
	{"level":"info","ts":"2023-05-31T18:45:38.217Z","caller":"traceutil/trace.go:171","msg":"trace[1712723025] transaction","detail":"{read_only:false; response_revision:411; number_of_response:1; }","duration":"107.487213ms","start":"2023-05-31T18:45:38.109Z","end":"2023-05-31T18:45:38.217Z","steps":["trace[1712723025] 'process raft request'  (duration: 38.56973ms)","trace[1712723025] 'compare'  (duration: 68.378378ms)"],"step_count":2}
	{"level":"info","ts":"2023-05-31T18:45:38.441Z","caller":"traceutil/trace.go:171","msg":"trace[5844250] transaction","detail":"{read_only:false; response_revision:415; number_of_response:1; }","duration":"153.985113ms","start":"2023-05-31T18:45:38.287Z","end":"2023-05-31T18:45:38.441Z","steps":["trace[5844250] 'process raft request'  (duration: 153.344519ms)"],"step_count":1}
	{"level":"info","ts":"2023-05-31T18:45:38.442Z","caller":"traceutil/trace.go:171","msg":"trace[1111194169] transaction","detail":"{read_only:false; response_revision:414; number_of_response:1; }","duration":"154.62674ms","start":"2023-05-31T18:45:38.287Z","end":"2023-05-31T18:45:38.442Z","steps":["trace[1111194169] 'process raft request'  (duration: 141.402836ms)","trace[1111194169] 'compare'  (duration: 11.64516ms)"],"step_count":2}
	{"level":"info","ts":"2023-05-31T18:45:38.499Z","caller":"traceutil/trace.go:171","msg":"trace[1078383080] transaction","detail":"{read_only:false; response_revision:417; number_of_response:1; }","duration":"115.21646ms","start":"2023-05-31T18:45:38.383Z","end":"2023-05-31T18:45:38.499Z","steps":["trace[1078383080] 'process raft request'  (duration: 67.6936ms)"],"step_count":1}
	
	* 
	* ==> gcp-auth [e045b0df91edc70792695e93ec373cac43b8d968e888050834ff77a8494fa801] <==
	* 2023/05/31 18:46:55 GCP Auth Webhook started!
	2023/05/31 18:47:15 Ready to marshal response ...
	2023/05/31 18:47:15 Ready to write response ...
	2023/05/31 18:47:28 Ready to marshal response ...
	2023/05/31 18:47:28 Ready to write response ...
	2023/05/31 18:49:47 Ready to marshal response ...
	2023/05/31 18:49:47 Ready to write response ...
	2023/05/31 18:50:13 Ready to marshal response ...
	2023/05/31 18:50:13 Ready to write response ...
	
	* 
	* ==> kernel <==
	*  18:50:13 up 32 min,  0 users,  load average: 0.24, 0.88, 0.51
	Linux addons-748280 5.15.0-1037-aws #41~20.04.1-Ubuntu SMP Mon May 22 18:20:20 UTC 2023 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.2 LTS"
	
	* 
	* ==> kindnet [7101bc75e91f244083f5e8fbbbea1a1dfaac460ae9023958afc3251cc0a02ea9] <==
	* I0531 18:48:08.913832       1 main.go:227] handling current node
	I0531 18:48:18.917796       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0531 18:48:18.917827       1 main.go:227] handling current node
	I0531 18:48:28.921666       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0531 18:48:28.921695       1 main.go:227] handling current node
	I0531 18:48:38.931804       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0531 18:48:38.931833       1 main.go:227] handling current node
	I0531 18:48:48.942574       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0531 18:48:48.942602       1 main.go:227] handling current node
	I0531 18:48:58.955122       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0531 18:48:58.955145       1 main.go:227] handling current node
	I0531 18:49:08.959378       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0531 18:49:08.959408       1 main.go:227] handling current node
	I0531 18:49:18.970550       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0531 18:49:18.970576       1 main.go:227] handling current node
	I0531 18:49:28.982344       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0531 18:49:28.982376       1 main.go:227] handling current node
	I0531 18:49:38.986403       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0531 18:49:38.986432       1 main.go:227] handling current node
	I0531 18:49:48.998465       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0531 18:49:48.998492       1 main.go:227] handling current node
	I0531 18:49:59.006922       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0531 18:49:59.006954       1 main.go:227] handling current node
	I0531 18:50:09.011909       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0531 18:50:09.011940       1 main.go:227] handling current node
	
	* 
	* ==> kube-apiserver [b2472317bc595f939b64447cf1f262c4adf9aebe969c91e002ddaa9571020a29] <==
	* E0531 18:46:09.311319       1 dispatcher.go:206] failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.111.7.220:443: connect: connection refused
	W0531 18:46:09.312285       1 dispatcher.go:202] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.111.7.220:443: connect: connection refused
	E0531 18:46:09.312375       1 dispatcher.go:206] failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.111.7.220:443: connect: connection refused
	I0531 18:46:16.097664       1 handler_discovery.go:325] DiscoveryManager: Failed to download discovery for kube-system/metrics-server:443: 503 error trying to reach service: dial tcp 10.100.179.33:443: connect: connection refused
	I0531 18:46:16.097748       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E0531 18:46:16.100161       1 available_controller.go:460] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.100.179.33:443/apis/metrics.k8s.io/v1beta1: Get "https://10.100.179.33:443/apis/metrics.k8s.io/v1beta1": dial tcp 10.100.179.33:443: connect: connection refused
	I0531 18:46:16.211630       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0531 18:46:18.114777       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0531 18:47:17.173788       1 controller.go:132] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	E0531 18:47:17.190004       1 handler_proxy.go:144] error resolving kube-system/metrics-server: service "metrics-server" not found
	W0531 18:47:17.190036       1 handler_proxy.go:100] no RequestInfo found in the context
	E0531 18:47:17.190078       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0531 18:47:17.190086       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0531 18:47:21.710027       1 handler.go:232] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	I0531 18:47:21.747025       1 handler.go:232] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0531 18:47:22.802016       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I0531 18:47:28.006633       1 controller.go:624] quota admission added evaluator for: ingresses.networking.k8s.io
	I0531 18:47:28.447126       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs=map[IPv4:10.106.134.191]
	E0531 18:48:17.190516       1 handler_proxy.go:144] error resolving kube-system/metrics-server: service "metrics-server" not found
	W0531 18:48:17.190544       1 handler_proxy.go:100] no RequestInfo found in the context
	E0531 18:48:17.190589       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0531 18:48:17.190601       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0531 18:49:47.955022       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs=map[IPv4:10.107.203.236]
	
	* 
	* ==> kube-controller-manager [12917d226c926cb6e809dd8b7aa8859740561c32bbb851945bfa4aaa99a74f3d] <==
	* E0531 18:47:26.311898       1 reflector.go:148] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0531 18:47:31.532528       1 reflector.go:533] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0531 18:47:31.532582       1 reflector.go:148] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0531 18:47:31.815009       1 namespace_controller.go:182] "Namespace has been deleted" namespace="gadget"
	I0531 18:47:34.127216       1 shared_informer.go:311] Waiting for caches to sync for resource quota
	I0531 18:47:34.127253       1 shared_informer.go:318] Caches are synced for resource quota
	I0531 18:47:34.497089       1 shared_informer.go:311] Waiting for caches to sync for garbage collector
	I0531 18:47:34.497142       1 shared_informer.go:318] Caches are synced for garbage collector
	W0531 18:47:41.078475       1 reflector.go:533] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0531 18:47:41.078509       1 reflector.go:148] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0531 18:48:03.377690       1 reflector.go:533] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0531 18:48:03.377818       1 reflector.go:148] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0531 18:48:42.748225       1 reflector.go:533] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0531 18:48:42.748260       1 reflector.go:148] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0531 18:49:24.269000       1 reflector.go:533] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0531 18:49:24.269131       1 reflector.go:148] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0531 18:49:47.640190       1 event.go:307] "Event occurred" object="default/hello-world-app" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set hello-world-app-65bdb79f98 to 1"
	I0531 18:49:47.681005       1 event.go:307] "Event occurred" object="default/hello-world-app-65bdb79f98" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: hello-world-app-65bdb79f98-vk6p5"
	I0531 18:50:05.120022       1 job_controller.go:523] enqueueing job ingress-nginx/ingress-nginx-admission-create
	I0531 18:50:05.120366       1 job_controller.go:523] enqueueing job ingress-nginx/ingress-nginx-admission-patch
	I0531 18:50:05.775289       1 event.go:307] "Event occurred" object="default/hpvc" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="waiting for a volume to be created, either by external provisioner \"hostpath.csi.k8s.io\" or manually created by system administrator"
	I0531 18:50:05.776250       1 event.go:307] "Event occurred" object="default/hpvc" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="waiting for a volume to be created, either by external provisioner \"hostpath.csi.k8s.io\" or manually created by system administrator"
	W0531 18:50:07.335941       1 reflector.go:533] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0531 18:50:07.336079       1 reflector.go:148] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0531 18:50:12.501371       1 event.go:307] "Event occurred" object="default/hpvc" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="waiting for a volume to be created, either by external provisioner \"hostpath.csi.k8s.io\" or manually created by system administrator"
	
	* 
	* ==> kube-proxy [6a504f33e4da1ef97d3eac91b1c1ebb5230daefb710a550c1682f472101c1723] <==
	* I0531 18:45:40.167202       1 node.go:141] Successfully retrieved node IP: 192.168.49.2
	I0531 18:45:40.191872       1 server_others.go:110] "Detected node IP" address="192.168.49.2"
	I0531 18:45:40.192624       1 server_others.go:551] "Using iptables proxy"
	I0531 18:45:40.366606       1 server_others.go:190] "Using iptables Proxier"
	I0531 18:45:40.366723       1 server_others.go:197] "kube-proxy running in dual-stack mode" ipFamily=IPv4
	I0531 18:45:40.366792       1 server_others.go:198] "Creating dualStackProxier for iptables"
	I0531 18:45:40.366831       1 server_others.go:481] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, defaulting to no-op detect-local for IPv6"
	I0531 18:45:40.366917       1 proxier.go:253] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0531 18:45:40.367585       1 server.go:657] "Version info" version="v1.27.2"
	I0531 18:45:40.367842       1 server.go:659] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0531 18:45:40.368684       1 config.go:188] "Starting service config controller"
	I0531 18:45:40.368792       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0531 18:45:40.368854       1 config.go:97] "Starting endpoint slice config controller"
	I0531 18:45:40.368900       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0531 18:45:40.369510       1 config.go:315] "Starting node config controller"
	I0531 18:45:40.370382       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0531 18:45:40.471004       1 shared_informer.go:318] Caches are synced for node config
	I0531 18:45:40.472771       1 shared_informer.go:318] Caches are synced for service config
	I0531 18:45:40.472800       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	* 
	* ==> kube-scheduler [622e30ad34794be09d941ef4c5989fb69f9afc992fb535d2fa37f71359c8e0ed] <==
	* W0531 18:45:18.412195       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0531 18:45:18.415387       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0531 18:45:18.412248       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0531 18:45:18.415454       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0531 18:45:18.412323       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0531 18:45:18.415518       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0531 18:45:18.412394       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0531 18:45:18.415589       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0531 18:45:18.412428       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0531 18:45:18.415665       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0531 18:45:18.412485       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0531 18:45:18.415730       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0531 18:45:19.280880       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0531 18:45:19.280917       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0531 18:45:19.298490       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0531 18:45:19.298609       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0531 18:45:19.299701       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0531 18:45:19.299788       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0531 18:45:19.325013       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0531 18:45:19.325110       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0531 18:45:19.337529       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0531 18:45:19.337631       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0531 18:45:19.371572       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0531 18:45:19.371702       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0531 18:45:19.880697       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* May 31 18:50:13 addons-748280 kubelet[1356]: E0531 18:50:13.291866    1356 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="60b6737d-85a4-4be3-a343-c649a32d5573" containerName="minikube-ingress-dns"
	May 31 18:50:13 addons-748280 kubelet[1356]: E0531 18:50:13.291876    1356 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="493fae28-0951-494b-9648-19fa4d464abb" containerName="controller"
	May 31 18:50:13 addons-748280 kubelet[1356]: E0531 18:50:13.291886    1356 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="60b6737d-85a4-4be3-a343-c649a32d5573" containerName="minikube-ingress-dns"
	May 31 18:50:13 addons-748280 kubelet[1356]: E0531 18:50:13.291894    1356 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="6510a1f0-5ba2-49e1-8749-6a1b8101c599" containerName="registry-proxy"
	May 31 18:50:13 addons-748280 kubelet[1356]: E0531 18:50:13.291902    1356 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="6510a1f0-5ba2-49e1-8749-6a1b8101c599" containerName="registry-proxy"
	May 31 18:50:13 addons-748280 kubelet[1356]: E0531 18:50:13.291909    1356 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="60b6737d-85a4-4be3-a343-c649a32d5573" containerName="minikube-ingress-dns"
	May 31 18:50:13 addons-748280 kubelet[1356]: E0531 18:50:13.291917    1356 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="60b6737d-85a4-4be3-a343-c649a32d5573" containerName="minikube-ingress-dns"
	May 31 18:50:13 addons-748280 kubelet[1356]: E0531 18:50:13.291924    1356 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="60b6737d-85a4-4be3-a343-c649a32d5573" containerName="minikube-ingress-dns"
	May 31 18:50:13 addons-748280 kubelet[1356]: E0531 18:50:13.291932    1356 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="98c2e1ee-6d1b-4140-a410-92c62d5b0c8e" containerName="registry"
	May 31 18:50:13 addons-748280 kubelet[1356]: I0531 18:50:13.291967    1356 memory_manager.go:346] "RemoveStaleState removing state" podUID="6510a1f0-5ba2-49e1-8749-6a1b8101c599" containerName="registry-proxy"
	May 31 18:50:13 addons-748280 kubelet[1356]: I0531 18:50:13.291975    1356 memory_manager.go:346] "RemoveStaleState removing state" podUID="98c2e1ee-6d1b-4140-a410-92c62d5b0c8e" containerName="registry"
	May 31 18:50:13 addons-748280 kubelet[1356]: I0531 18:50:13.291982    1356 memory_manager.go:346] "RemoveStaleState removing state" podUID="6510a1f0-5ba2-49e1-8749-6a1b8101c599" containerName="registry-proxy"
	May 31 18:50:13 addons-748280 kubelet[1356]: I0531 18:50:13.291989    1356 memory_manager.go:346] "RemoveStaleState removing state" podUID="493fae28-0951-494b-9648-19fa4d464abb" containerName="controller"
	May 31 18:50:13 addons-748280 kubelet[1356]: I0531 18:50:13.291997    1356 memory_manager.go:346] "RemoveStaleState removing state" podUID="60b6737d-85a4-4be3-a343-c649a32d5573" containerName="minikube-ingress-dns"
	May 31 18:50:13 addons-748280 kubelet[1356]: I0531 18:50:13.292005    1356 memory_manager.go:346] "RemoveStaleState removing state" podUID="60b6737d-85a4-4be3-a343-c649a32d5573" containerName="minikube-ingress-dns"
	May 31 18:50:13 addons-748280 kubelet[1356]: I0531 18:50:13.292012    1356 memory_manager.go:346] "RemoveStaleState removing state" podUID="6510a1f0-5ba2-49e1-8749-6a1b8101c599" containerName="registry-proxy"
	May 31 18:50:13 addons-748280 kubelet[1356]: I0531 18:50:13.292020    1356 memory_manager.go:346] "RemoveStaleState removing state" podUID="60b6737d-85a4-4be3-a343-c649a32d5573" containerName="minikube-ingress-dns"
	May 31 18:50:13 addons-748280 kubelet[1356]: I0531 18:50:13.292026    1356 memory_manager.go:346] "RemoveStaleState removing state" podUID="60b6737d-85a4-4be3-a343-c649a32d5573" containerName="minikube-ingress-dns"
	May 31 18:50:13 addons-748280 kubelet[1356]: I0531 18:50:13.292033    1356 memory_manager.go:346] "RemoveStaleState removing state" podUID="6510a1f0-5ba2-49e1-8749-6a1b8101c599" containerName="registry-proxy"
	May 31 18:50:13 addons-748280 kubelet[1356]: I0531 18:50:13.431574    1356 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cj65m\" (UniqueName: \"kubernetes.io/projected/56674f4a-cc7d-413f-8bd9-683f21f19e7d-kube-api-access-cj65m\") pod \"task-pv-pod\" (UID: \"56674f4a-cc7d-413f-8bd9-683f21f19e7d\") " pod="default/task-pv-pod"
	May 31 18:50:13 addons-748280 kubelet[1356]: I0531 18:50:13.431628    1356 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-d3fd77d9-44a5-4ea4-b310-f7b55ddbbdec\" (UniqueName: \"kubernetes.io/csi/hostpath.csi.k8s.io^f9658551-ffe3-11ed-9747-d61950d5ab1b\") pod \"task-pv-pod\" (UID: \"56674f4a-cc7d-413f-8bd9-683f21f19e7d\") " pod="default/task-pv-pod"
	May 31 18:50:13 addons-748280 kubelet[1356]: I0531 18:50:13.431657    1356 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/56674f4a-cc7d-413f-8bd9-683f21f19e7d-gcp-creds\") pod \"task-pv-pod\" (UID: \"56674f4a-cc7d-413f-8bd9-683f21f19e7d\") " pod="default/task-pv-pod"
	May 31 18:50:13 addons-748280 kubelet[1356]: I0531 18:50:13.555409    1356 operation_generator.go:661] "MountVolume.MountDevice succeeded for volume \"pvc-d3fd77d9-44a5-4ea4-b310-f7b55ddbbdec\" (UniqueName: \"kubernetes.io/csi/hostpath.csi.k8s.io^f9658551-ffe3-11ed-9747-d61950d5ab1b\") pod \"task-pv-pod\" (UID: \"56674f4a-cc7d-413f-8bd9-683f21f19e7d\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/hostpath.csi.k8s.io/b60cd989f82af301827399f8cf4cf9099c4bfa9c6a1de759b6505171160e44f2/globalmount\"" pod="default/task-pv-pod"
	May 31 18:50:13 addons-748280 kubelet[1356]: W0531 18:50:13.650459    1356 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/7a58041a0a26da39225fc1484cf566703381ab580d380a421227d99124b51cb9/crio/crio-eb685c59de81dba014f106234c807b7d63afa975a5f799635b0597d73d5c1ebd WatchSource:0}: Error finding container eb685c59de81dba014f106234c807b7d63afa975a5f799635b0597d73d5c1ebd: Status 404 returned error can't find the container with id eb685c59de81dba014f106234c807b7d63afa975a5f799635b0597d73d5c1ebd
	May 31 18:50:14 addons-748280 kubelet[1356]: W0531 18:50:14.243847    1356 container.go:586] Failed to update stats for container "/docker/7a58041a0a26da39225fc1484cf566703381ab580d380a421227d99124b51cb9/crio/crio-73c8e7c64ec694923645654fc764b86db67867e35ce7ac47365570033550487f": unable to determine device info for dir: /var/lib/containers/storage/overlay/c35b0c8079230e580b179eaaf14e83e9a3d621e2545b44df2ca93673c63fe8f1/diff: stat failed on /var/lib/containers/storage/overlay/c35b0c8079230e580b179eaaf14e83e9a3d621e2545b44df2ca93673c63fe8f1/diff with error: no such file or directory, continuing to push stats
	
	* 
	* ==> storage-provisioner [2b333ed8ee445024a127328209074caea001d936536b386c50866ea28a97614e] <==
	* I0531 18:46:10.183687       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0531 18:46:10.197858       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0531 18:46:10.197937       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0531 18:46:10.207243       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0531 18:46:10.207509       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-748280_0c5281de-a47e-49ae-ac9f-f8755360573b!
	I0531 18:46:10.208493       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"b56dbde0-2bf3-41e0-98ff-a56fe3b3072e", APIVersion:"v1", ResourceVersion:"847", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-748280_0c5281de-a47e-49ae-ac9f-f8755360573b became leader
	I0531 18:46:10.307914       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-748280_0c5281de-a47e-49ae-ac9f-f8755360573b!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-748280 -n addons-748280
helpers_test.go:261: (dbg) Run:  kubectl --context addons-748280 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: task-pv-pod
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-748280 describe pod task-pv-pod
helpers_test.go:282: (dbg) kubectl --context addons-748280 describe pod task-pv-pod:

                                                
                                                
-- stdout --
	Name:             task-pv-pod
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-748280/192.168.49.2
	Start Time:       Wed, 31 May 2023 18:50:13 +0000
	Labels:           app=task-pv-pod
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Containers:
	  task-pv-container:
	    Container ID:   
	    Image:          docker.io/nginx
	    Image ID:       
	    Port:           80/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ContainerCreating
	    Ready:          False
	    Restart Count:  0
	    Environment:
	      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
	      PROJECT_ID:                      this_is_fake
	      GCP_PROJECT:                     this_is_fake
	      GCLOUD_PROJECT:                  this_is_fake
	      GOOGLE_CLOUD_PROJECT:            this_is_fake
	      CLOUDSDK_CORE_PROJECT:           this_is_fake
	    Mounts:
	      /google-app-creds.json from gcp-creds (ro)
	      /usr/share/nginx/html from task-pv-storage (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-cj65m (ro)
	Conditions:
	  Type              Status
	  Initialized       True 
	  Ready             False 
	  ContainersReady   False 
	  PodScheduled      True 
	Volumes:
	  task-pv-storage:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  hpvc
	    ReadOnly:   false
	  kube-api-access-cj65m:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	  gcp-creds:
	    Type:          HostPath (bare host directory volume)
	    Path:          /var/lib/minikube/google_application_credentials.json
	    HostPathType:  File
	QoS Class:         BestEffort
	Node-Selectors:    <none>
	Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  2s    default-scheduler  Successfully assigned default/task-pv-pod to addons-748280
	  Normal  Pulling    2s    kubelet            Pulling image "docker.io/nginx"

                                                
                                                
-- /stdout --
helpers_test.go:285: <<< TestAddons/parallel/Ingress FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/Ingress (168.15s)

                                                
                                    
x
+
TestIngressAddonLegacy/serial/ValidateIngressAddons (175.5s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddons
addons_test.go:183: (dbg) Run:  kubectl --context ingress-addon-legacy-546551 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:183: (dbg) Done: kubectl --context ingress-addon-legacy-546551 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s: (9.735877747s)
addons_test.go:208: (dbg) Run:  kubectl --context ingress-addon-legacy-546551 replace --force -f testdata/nginx-ingress-v1beta1.yaml
addons_test.go:221: (dbg) Run:  kubectl --context ingress-addon-legacy-546551 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:226: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [9b056243-1c72-47ed-8ddc-6521813a9f3b] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [9b056243-1c72-47ed-8ddc-6521813a9f3b] Running
addons_test.go:226: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: run=nginx healthy within 9.01410725s
addons_test.go:238: (dbg) Run:  out/minikube-linux-arm64 -p ingress-addon-legacy-546551 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
E0531 19:00:18.520852    7804 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16569-2389/.minikube/profiles/functional-747104/client.crt: no such file or directory
E0531 19:00:18.526163    7804 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16569-2389/.minikube/profiles/functional-747104/client.crt: no such file or directory
E0531 19:00:18.536623    7804 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16569-2389/.minikube/profiles/functional-747104/client.crt: no such file or directory
E0531 19:00:18.556867    7804 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16569-2389/.minikube/profiles/functional-747104/client.crt: no such file or directory
E0531 19:00:18.597165    7804 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16569-2389/.minikube/profiles/functional-747104/client.crt: no such file or directory
E0531 19:00:18.677439    7804 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16569-2389/.minikube/profiles/functional-747104/client.crt: no such file or directory
E0531 19:00:18.837698    7804 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16569-2389/.minikube/profiles/functional-747104/client.crt: no such file or directory
E0531 19:00:19.158231    7804 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16569-2389/.minikube/profiles/functional-747104/client.crt: no such file or directory
E0531 19:00:19.798571    7804 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16569-2389/.minikube/profiles/functional-747104/client.crt: no such file or directory
E0531 19:00:21.078851    7804 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16569-2389/.minikube/profiles/functional-747104/client.crt: no such file or directory
E0531 19:00:23.639068    7804 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16569-2389/.minikube/profiles/functional-747104/client.crt: no such file or directory
E0531 19:00:28.760194    7804 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16569-2389/.minikube/profiles/functional-747104/client.crt: no such file or directory
E0531 19:00:39.001327    7804 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16569-2389/.minikube/profiles/functional-747104/client.crt: no such file or directory
E0531 19:00:59.481561    7804 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16569-2389/.minikube/profiles/functional-747104/client.crt: no such file or directory
addons_test.go:238: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ingress-addon-legacy-546551 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m9.917389336s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:254: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
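The repeated cert_rotation errors above are most likely stale noise: they point at the client certificate of the functional-747104 profile, which the audit log further down shows was deleted minutes earlier in this run. The real failure is the curl probe: "exit status 28" matches curl's "operation timed out" code, so the request reached the node but nothing behind the ingress ever answered. A minimal sketch for reproducing the probe by hand against this profile (the controller label selector is an assumption based on the ingress addon's usual manifests, not something recorded in this log):

    # re-run the failing probe with verbose output and an explicit timeout
    out/minikube-linux-arm64 -p ingress-addon-legacy-546551 ssh "curl -v --max-time 30 http://127.0.0.1/ -H 'Host: nginx.example.com'"
    # check whether the ingress controller pod came up at all, and on which node
    kubectl --context ingress-addon-legacy-546551 -n kube-system get pods -l app.kubernetes.io/name=ingress-nginx -o wide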
addons_test.go:262: (dbg) Run:  kubectl --context ingress-addon-legacy-546551 replace --force -f testdata/ingress-dns-example-v1beta1.yaml
addons_test.go:267: (dbg) Run:  out/minikube-linux-arm64 -p ingress-addon-legacy-546551 ip
addons_test.go:273: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:273: (dbg) Non-zero exit: nslookup hello-john.test 192.168.49.2: exit status 1 (15.033444681s)

-- stdout --
	;; connection timed out; no servers could be reached

-- /stdout --
addons_test.go:275: failed to nslookup hello-john.test host. args "nslookup hello-john.test 192.168.49.2" : exit status 1
addons_test.go:279: unexpected output from nslookup. stdout: ;; connection timed out; no servers could be reached

stderr: 
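"no servers could be reached" means the query to 192.168.49.2:53 went entirely unanswered; the ingress-dns addon is expected to serve DNS on the node IP. A sketch for narrowing this down (the pod label below is an assumption taken from the addon's manifest, not from this log):

    # retry with a short per-query timeout instead of nslookup's default
    nslookup -timeout=5 hello-john.test 192.168.49.2
    # inspect the addon pod and its recent logs
    kubectl --context ingress-addon-legacy-546551 -n kube-system get pods -l app=minikube-ingress-dns
    kubectl --context ingress-addon-legacy-546551 -n kube-system logs -l app=minikube-ingress-dns --tail=50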
addons_test.go:282: (dbg) Run:  out/minikube-linux-arm64 -p ingress-addon-legacy-546551 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:287: (dbg) Run:  out/minikube-linux-arm64 -p ingress-addon-legacy-546551 addons disable ingress --alsologtostderr -v=1
addons_test.go:287: (dbg) Done: out/minikube-linux-arm64 -p ingress-addon-legacy-546551 addons disable ingress --alsologtostderr -v=1: (7.334116486s)
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestIngressAddonLegacy/serial/ValidateIngressAddons]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect ingress-addon-legacy-546551
helpers_test.go:235: (dbg) docker inspect ingress-addon-legacy-546551:

-- stdout --
	[
	    {
	        "Id": "30fd7e50ddb328fe0f774d766989974307ba682c6cc0d769d60d19d0ae950633",
	        "Created": "2023-05-31T18:56:40.955303808Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 35843,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-05-31T18:56:41.296189552Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:dee4774de4f99268e16c76379a36a4607bed47635a069a7e60c17cd24d9aaa76",
	        "ResolvConfPath": "/var/lib/docker/containers/30fd7e50ddb328fe0f774d766989974307ba682c6cc0d769d60d19d0ae950633/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/30fd7e50ddb328fe0f774d766989974307ba682c6cc0d769d60d19d0ae950633/hostname",
	        "HostsPath": "/var/lib/docker/containers/30fd7e50ddb328fe0f774d766989974307ba682c6cc0d769d60d19d0ae950633/hosts",
	        "LogPath": "/var/lib/docker/containers/30fd7e50ddb328fe0f774d766989974307ba682c6cc0d769d60d19d0ae950633/30fd7e50ddb328fe0f774d766989974307ba682c6cc0d769d60d19d0ae950633-json.log",
	        "Name": "/ingress-addon-legacy-546551",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ingress-addon-legacy-546551:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "ingress-addon-legacy-546551",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/d14b98328d6ef53c311ed5581ca6cb59f15ddc5ac2b77218252777c2d1eb8c81-init/diff:/var/lib/docker/overlay2/548bced7e749d102323bab71db162b075785f916e2a896d29f3adc2c3d7fbea8/diff",
	                "MergedDir": "/var/lib/docker/overlay2/d14b98328d6ef53c311ed5581ca6cb59f15ddc5ac2b77218252777c2d1eb8c81/merged",
	                "UpperDir": "/var/lib/docker/overlay2/d14b98328d6ef53c311ed5581ca6cb59f15ddc5ac2b77218252777c2d1eb8c81/diff",
	                "WorkDir": "/var/lib/docker/overlay2/d14b98328d6ef53c311ed5581ca6cb59f15ddc5ac2b77218252777c2d1eb8c81/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ingress-addon-legacy-546551",
	                "Source": "/var/lib/docker/volumes/ingress-addon-legacy-546551/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ingress-addon-legacy-546551",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685034446-16582@sha256:aa728b22374c829d1e5b0a5d64d51d3e0ae0f2b191381d957516fdff68f357c8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ingress-addon-legacy-546551",
	                "name.minikube.sigs.k8s.io": "ingress-addon-legacy-546551",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "e9fe6bc0b11485ea9555db5fa661bb2e81ac4507c1d51b18f054718f81b485e5",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32787"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32786"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32783"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32785"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32784"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/e9fe6bc0b114",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ingress-addon-legacy-546551": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "30fd7e50ddb3",
	                        "ingress-addon-legacy-546551"
	                    ],
	                    "NetworkID": "01c2a210eb79cbaca68ad388869d2b606972b6c6558b6bb468913a634c4ee939",
	                    "EndpointID": "2d9a9c69ebec7b6d6e39999931e8a64309ffc111b9c1168877a117cf29c2e162",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
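The inspect output above shows each container port published on an ephemeral 127.0.0.1 port (22/tcp on 32787, 8443/tcp on 32784, 5000/tcp on 32785, and so on). The same Go template the harness uses later in these logs can extract a single mapping, e.g. the SSH port:

    docker container inspect ingress-addon-legacy-546551 --format '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'
    # prints 32787 for this run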
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p ingress-addon-legacy-546551 -n ingress-addon-legacy-546551
helpers_test.go:244: <<< TestIngressAddonLegacy/serial/ValidateIngressAddons FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestIngressAddonLegacy/serial/ValidateIngressAddons]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p ingress-addon-legacy-546551 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p ingress-addon-legacy-546551 logs -n 25: (1.398109638s)
helpers_test.go:252: TestIngressAddonLegacy/serial/ValidateIngressAddons logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |----------------|--------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	|    Command     |                 Args                 |           Profile           |  User   | Version |     Start Time      |      End Time       |
	|----------------|--------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	| ssh            | functional-747104 ssh findmnt        | functional-747104           | jenkins | v1.30.1 | 31 May 23 18:56 UTC | 31 May 23 18:56 UTC |
	|                | -T /mount2                           |                             |         |         |                     |                     |
	| ssh            | functional-747104 ssh findmnt        | functional-747104           | jenkins | v1.30.1 | 31 May 23 18:56 UTC | 31 May 23 18:56 UTC |
	|                | -T /mount3                           |                             |         |         |                     |                     |
	| start          | -p functional-747104                 | functional-747104           | jenkins | v1.30.1 | 31 May 23 18:56 UTC |                     |
	|                | --dry-run --memory                   |                             |         |         |                     |                     |
	|                | 250MB --alsologtostderr              |                             |         |         |                     |                     |
	|                | --driver=docker                      |                             |         |         |                     |                     |
	|                | --container-runtime=crio             |                             |         |         |                     |                     |
	| mount          | -p functional-747104                 | functional-747104           | jenkins | v1.30.1 | 31 May 23 18:56 UTC |                     |
	|                | --kill=true                          |                             |         |         |                     |                     |
	| start          | -p functional-747104                 | functional-747104           | jenkins | v1.30.1 | 31 May 23 18:56 UTC |                     |
	|                | --dry-run --memory                   |                             |         |         |                     |                     |
	|                | 250MB --alsologtostderr              |                             |         |         |                     |                     |
	|                | --driver=docker                      |                             |         |         |                     |                     |
	|                | --container-runtime=crio             |                             |         |         |                     |                     |
	| start          | -p functional-747104                 | functional-747104           | jenkins | v1.30.1 | 31 May 23 18:56 UTC |                     |
	|                | --dry-run --alsologtostderr          |                             |         |         |                     |                     |
	|                | -v=1 --driver=docker                 |                             |         |         |                     |                     |
	|                | --container-runtime=crio             |                             |         |         |                     |                     |
	| dashboard      | --url --port 36195                   | functional-747104           | jenkins | v1.30.1 | 31 May 23 18:56 UTC | 31 May 23 18:56 UTC |
	|                | -p functional-747104                 |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=1               |                             |         |         |                     |                     |
	| update-context | functional-747104                    | functional-747104           | jenkins | v1.30.1 | 31 May 23 18:56 UTC | 31 May 23 18:56 UTC |
	|                | update-context                       |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=2               |                             |         |         |                     |                     |
	| update-context | functional-747104                    | functional-747104           | jenkins | v1.30.1 | 31 May 23 18:56 UTC | 31 May 23 18:56 UTC |
	|                | update-context                       |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=2               |                             |         |         |                     |                     |
	| update-context | functional-747104                    | functional-747104           | jenkins | v1.30.1 | 31 May 23 18:56 UTC | 31 May 23 18:56 UTC |
	|                | update-context                       |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=2               |                             |         |         |                     |                     |
	| image          | functional-747104                    | functional-747104           | jenkins | v1.30.1 | 31 May 23 18:56 UTC | 31 May 23 18:56 UTC |
	|                | image ls --format short              |                             |         |         |                     |                     |
	|                | --alsologtostderr                    |                             |         |         |                     |                     |
	| image          | functional-747104                    | functional-747104           | jenkins | v1.30.1 | 31 May 23 18:56 UTC | 31 May 23 18:56 UTC |
	|                | image ls --format yaml               |                             |         |         |                     |                     |
	|                | --alsologtostderr                    |                             |         |         |                     |                     |
	| ssh            | functional-747104 ssh pgrep          | functional-747104           | jenkins | v1.30.1 | 31 May 23 18:56 UTC |                     |
	|                | buildkitd                            |                             |         |         |                     |                     |
	| image          | functional-747104 image build -t     | functional-747104           | jenkins | v1.30.1 | 31 May 23 18:56 UTC | 31 May 23 18:56 UTC |
	|                | localhost/my-image:functional-747104 |                             |         |         |                     |                     |
	|                | testdata/build --alsologtostderr     |                             |         |         |                     |                     |
	| image          | functional-747104 image ls           | functional-747104           | jenkins | v1.30.1 | 31 May 23 18:56 UTC | 31 May 23 18:56 UTC |
	| image          | functional-747104                    | functional-747104           | jenkins | v1.30.1 | 31 May 23 18:56 UTC | 31 May 23 18:56 UTC |
	|                | image ls --format json               |                             |         |         |                     |                     |
	|                | --alsologtostderr                    |                             |         |         |                     |                     |
	| image          | functional-747104                    | functional-747104           | jenkins | v1.30.1 | 31 May 23 18:56 UTC | 31 May 23 18:56 UTC |
	|                | image ls --format table              |                             |         |         |                     |                     |
	|                | --alsologtostderr                    |                             |         |         |                     |                     |
	| delete         | -p functional-747104                 | functional-747104           | jenkins | v1.30.1 | 31 May 23 18:56 UTC | 31 May 23 18:56 UTC |
	| start          | -p ingress-addon-legacy-546551       | ingress-addon-legacy-546551 | jenkins | v1.30.1 | 31 May 23 18:56 UTC | 31 May 23 18:58 UTC |
	|                | --kubernetes-version=v1.18.20        |                             |         |         |                     |                     |
	|                | --memory=4096 --wait=true            |                             |         |         |                     |                     |
	|                | --alsologtostderr                    |                             |         |         |                     |                     |
	|                | -v=5 --driver=docker                 |                             |         |         |                     |                     |
	|                | --container-runtime=crio             |                             |         |         |                     |                     |
	| addons         | ingress-addon-legacy-546551          | ingress-addon-legacy-546551 | jenkins | v1.30.1 | 31 May 23 18:58 UTC | 31 May 23 18:58 UTC |
	|                | addons enable ingress                |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=5               |                             |         |         |                     |                     |
	| addons         | ingress-addon-legacy-546551          | ingress-addon-legacy-546551 | jenkins | v1.30.1 | 31 May 23 18:58 UTC | 31 May 23 18:58 UTC |
	|                | addons enable ingress-dns            |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=5               |                             |         |         |                     |                     |
	| ssh            | ingress-addon-legacy-546551          | ingress-addon-legacy-546551 | jenkins | v1.30.1 | 31 May 23 18:58 UTC |                     |
	|                | ssh curl -s http://127.0.0.1/        |                             |         |         |                     |                     |
	|                | -H 'Host: nginx.example.com'         |                             |         |         |                     |                     |
	| ip             | ingress-addon-legacy-546551 ip       | ingress-addon-legacy-546551 | jenkins | v1.30.1 | 31 May 23 19:01 UTC | 31 May 23 19:01 UTC |
	| addons         | ingress-addon-legacy-546551          | ingress-addon-legacy-546551 | jenkins | v1.30.1 | 31 May 23 19:01 UTC | 31 May 23 19:01 UTC |
	|                | addons disable ingress-dns           |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=1               |                             |         |         |                     |                     |
	| addons         | ingress-addon-legacy-546551          | ingress-addon-legacy-546551 | jenkins | v1.30.1 | 31 May 23 19:01 UTC | 31 May 23 19:01 UTC |
	|                | addons disable ingress               |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=1               |                             |         |         |                     |                     |
	|----------------|--------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/05/31 18:56:22
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.20.4 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0531 18:56:22.374957   35394 out.go:296] Setting OutFile to fd 1 ...
	I0531 18:56:22.375170   35394 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0531 18:56:22.375191   35394 out.go:309] Setting ErrFile to fd 2...
	I0531 18:56:22.375211   35394 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0531 18:56:22.375401   35394 root.go:336] Updating PATH: /home/jenkins/minikube-integration/16569-2389/.minikube/bin
	I0531 18:56:22.375880   35394 out.go:303] Setting JSON to false
	I0531 18:56:22.377229   35394 start.go:127] hostinfo: {"hostname":"ip-172-31-21-244","uptime":2328,"bootTime":1685557055,"procs":482,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1037-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I0531 18:56:22.377343   35394 start.go:137] virtualization:  
	I0531 18:56:22.380131   35394 out.go:177] * [ingress-addon-legacy-546551] minikube v1.30.1 on Ubuntu 20.04 (arm64)
	I0531 18:56:22.382449   35394 out.go:177]   - MINIKUBE_LOCATION=16569
	I0531 18:56:22.384534   35394 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0531 18:56:22.382541   35394 notify.go:220] Checking for updates...
	I0531 18:56:22.386594   35394 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/16569-2389/kubeconfig
	I0531 18:56:22.388299   35394 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/16569-2389/.minikube
	I0531 18:56:22.390829   35394 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0531 18:56:22.392779   35394 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0531 18:56:22.395183   35394 driver.go:375] Setting default libvirt URI to qemu:///system
	I0531 18:56:22.422642   35394 docker.go:121] docker version: linux-24.0.2:Docker Engine - Community
	I0531 18:56:22.422754   35394 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0531 18:56:22.501669   35394 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:24 OomKillDisable:true NGoroutines:36 SystemTime:2023-05-31 18:56:22.492160738 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1037-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215166976 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:24.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dce8eb055cbb6872793272b4f20ed16117344f8 Expected:3dce8eb055cbb6872793272b4f20ed16117344f8} RuncCommit:{ID:v1.1.7-0-g860f061 Expected:v1.1.7-0-g860f061} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.5] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.18.1]] Warnings:<nil>}}
	I0531 18:56:22.501784   35394 docker.go:294] overlay module found
	I0531 18:56:22.505412   35394 out.go:177] * Using the docker driver based on user configuration
	I0531 18:56:22.507135   35394 start.go:297] selected driver: docker
	I0531 18:56:22.507154   35394 start.go:875] validating driver "docker" against <nil>
	I0531 18:56:22.507167   35394 start.go:886] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0531 18:56:22.507830   35394 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0531 18:56:22.567714   35394 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:24 OomKillDisable:true NGoroutines:36 SystemTime:2023-05-31 18:56:22.558456495 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1037-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215166976 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:24.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dce8eb055cbb6872793272b4f20ed16117344f8 Expected:3dce8eb055cbb6872793272b4f20ed16117344f8} RuncCommit:{ID:v1.1.7-0-g860f061 Expected:v1.1.7-0-g860f061} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.5] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.18.1]] Warnings:<nil>}}
	I0531 18:56:22.567868   35394 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0531 18:56:22.568098   35394 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0531 18:56:22.570084   35394 out.go:177] * Using Docker driver with root privileges
	I0531 18:56:22.571754   35394 cni.go:84] Creating CNI manager for ""
	I0531 18:56:22.571771   35394 cni.go:142] "docker" driver + "crio" runtime found, recommending kindnet
	I0531 18:56:22.571781   35394 start_flags.go:314] Found "CNI" CNI - setting NetworkPlugin=cni
	I0531 18:56:22.571791   35394 start_flags.go:319] config:
	{Name:ingress-addon-legacy-546551 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685034446-16582@sha256:aa728b22374c829d1e5b0a5d64d51d3e0ae0f2b191381d957516fdff68f357c8 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-546551 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0531 18:56:22.574025   35394 out.go:177] * Starting control plane node ingress-addon-legacy-546551 in cluster ingress-addon-legacy-546551
	I0531 18:56:22.576016   35394 cache.go:122] Beginning downloading kic base image for docker with crio
	I0531 18:56:22.578383   35394 out.go:177] * Pulling base image ...
	I0531 18:56:22.580653   35394 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime crio
	I0531 18:56:22.580738   35394 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685034446-16582@sha256:aa728b22374c829d1e5b0a5d64d51d3e0ae0f2b191381d957516fdff68f357c8 in local docker daemon
	I0531 18:56:22.598298   35394 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685034446-16582@sha256:aa728b22374c829d1e5b0a5d64d51d3e0ae0f2b191381d957516fdff68f357c8 in local docker daemon, skipping pull
	I0531 18:56:22.598340   35394 cache.go:145] gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685034446-16582@sha256:aa728b22374c829d1e5b0a5d64d51d3e0ae0f2b191381d957516fdff68f357c8 exists in daemon, skipping load
	I0531 18:56:22.654829   35394 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4
	I0531 18:56:22.654852   35394 cache.go:57] Caching tarball of preloaded images
	I0531 18:56:22.655700   35394 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime crio
	I0531 18:56:22.657858   35394 out.go:177] * Downloading Kubernetes v1.18.20 preload ...
	I0531 18:56:22.659554   35394 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4 ...
	I0531 18:56:22.787077   35394 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4?checksum=md5:8ddd7f37d9a9977fe856222993d36c3d -> /home/jenkins/minikube-integration/16569-2389/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4
	I0531 18:56:33.039073   35394 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4 ...
	I0531 18:56:33.039695   35394 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/16569-2389/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4 ...
	I0531 18:56:34.145646   35394 cache.go:60] Finished verifying existence of preloaded tar for  v1.18.20 on crio
	I0531 18:56:34.146048   35394 profile.go:148] Saving config to /home/jenkins/minikube-integration/16569-2389/.minikube/profiles/ingress-addon-legacy-546551/config.json ...
	I0531 18:56:34.146083   35394 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16569-2389/.minikube/profiles/ingress-addon-legacy-546551/config.json: {Name:mk6a544ab2f69f7adc65e7297054ca820e0ffabc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0531 18:56:34.146693   35394 cache.go:195] Successfully downloaded all kic artifacts
	I0531 18:56:34.146757   35394 start.go:364] acquiring machines lock for ingress-addon-legacy-546551: {Name:mk62e32d8c4fd7cbc0a41b5dbd152e895d44f2fc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0531 18:56:34.147328   35394 start.go:368] acquired machines lock for "ingress-addon-legacy-546551" in 553.947µs
	I0531 18:56:34.147356   35394 start.go:93] Provisioning new machine with config: &{Name:ingress-addon-legacy-546551 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685034446-16582@sha256:aa728b22374c829d1e5b0a5d64d51d3e0ae0f2b191381d957516fdff68f357c8 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-546551 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:} &{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0531 18:56:34.147439   35394 start.go:125] createHost starting for "" (driver="docker")
	I0531 18:56:34.150016   35394 out.go:204] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I0531 18:56:34.150225   35394 start.go:159] libmachine.API.Create for "ingress-addon-legacy-546551" (driver="docker")
	I0531 18:56:34.150246   35394 client.go:168] LocalClient.Create starting
	I0531 18:56:34.150323   35394 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/16569-2389/.minikube/certs/ca.pem
	I0531 18:56:34.150361   35394 main.go:141] libmachine: Decoding PEM data...
	I0531 18:56:34.150380   35394 main.go:141] libmachine: Parsing certificate...
	I0531 18:56:34.150443   35394 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/16569-2389/.minikube/certs/cert.pem
	I0531 18:56:34.150465   35394 main.go:141] libmachine: Decoding PEM data...
	I0531 18:56:34.150481   35394 main.go:141] libmachine: Parsing certificate...
	I0531 18:56:34.150893   35394 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-546551 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0531 18:56:34.168322   35394 cli_runner.go:211] docker network inspect ingress-addon-legacy-546551 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0531 18:56:34.168419   35394 network_create.go:281] running [docker network inspect ingress-addon-legacy-546551] to gather additional debugging logs...
	I0531 18:56:34.168440   35394 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-546551
	W0531 18:56:34.188995   35394 cli_runner.go:211] docker network inspect ingress-addon-legacy-546551 returned with exit code 1
	I0531 18:56:34.189032   35394 network_create.go:284] error running [docker network inspect ingress-addon-legacy-546551]: docker network inspect ingress-addon-legacy-546551: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network ingress-addon-legacy-546551 not found
	I0531 18:56:34.189048   35394 network_create.go:286] output of [docker network inspect ingress-addon-legacy-546551]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network ingress-addon-legacy-546551 not found
	
	** /stderr **
	I0531 18:56:34.189130   35394 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0531 18:56:34.207980   35394 network.go:209] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4000f55620}
	I0531 18:56:34.208020   35394 network_create.go:123] attempt to create docker network ingress-addon-legacy-546551 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0531 18:56:34.208095   35394 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=ingress-addon-legacy-546551 ingress-addon-legacy-546551
	I0531 18:56:34.285097   35394 network_create.go:107] docker network ingress-addon-legacy-546551 192.168.49.0/24 created
	I0531 18:56:34.285125   35394 kic.go:117] calculated static IP "192.168.49.2" for the "ingress-addon-legacy-546551" container
	I0531 18:56:34.285195   35394 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0531 18:56:34.301764   35394 cli_runner.go:164] Run: docker volume create ingress-addon-legacy-546551 --label name.minikube.sigs.k8s.io=ingress-addon-legacy-546551 --label created_by.minikube.sigs.k8s.io=true
	I0531 18:56:34.321261   35394 oci.go:103] Successfully created a docker volume ingress-addon-legacy-546551
	I0531 18:56:34.321353   35394 cli_runner.go:164] Run: docker run --rm --name ingress-addon-legacy-546551-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-546551 --entrypoint /usr/bin/test -v ingress-addon-legacy-546551:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685034446-16582@sha256:aa728b22374c829d1e5b0a5d64d51d3e0ae0f2b191381d957516fdff68f357c8 -d /var/lib
	I0531 18:56:35.829903   35394 cli_runner.go:217] Completed: docker run --rm --name ingress-addon-legacy-546551-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-546551 --entrypoint /usr/bin/test -v ingress-addon-legacy-546551:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685034446-16582@sha256:aa728b22374c829d1e5b0a5d64d51d3e0ae0f2b191381d957516fdff68f357c8 -d /var/lib: (1.508497049s)
	I0531 18:56:35.829930   35394 oci.go:107] Successfully prepared a docker volume ingress-addon-legacy-546551
	I0531 18:56:35.829957   35394 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime crio
	I0531 18:56:35.829976   35394 kic.go:190] Starting extracting preloaded images to volume ...
	I0531 18:56:35.830065   35394 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/16569-2389/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v ingress-addon-legacy-546551:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685034446-16582@sha256:aa728b22374c829d1e5b0a5d64d51d3e0ae0f2b191381d957516fdff68f357c8 -I lz4 -xf /preloaded.tar -C /extractDir
	I0531 18:56:40.875079   35394 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/16569-2389/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v ingress-addon-legacy-546551:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685034446-16582@sha256:aa728b22374c829d1e5b0a5d64d51d3e0ae0f2b191381d957516fdff68f357c8 -I lz4 -xf /preloaded.tar -C /extractDir: (5.044971456s)
	I0531 18:56:40.875111   35394 kic.go:199] duration metric: took 5.045131 seconds to extract preloaded images to volume
	W0531 18:56:40.875256   35394 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0531 18:56:40.875362   35394 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0531 18:56:40.938760   35394 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ingress-addon-legacy-546551 --name ingress-addon-legacy-546551 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-546551 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ingress-addon-legacy-546551 --network ingress-addon-legacy-546551 --ip 192.168.49.2 --volume ingress-addon-legacy-546551:/var --security-opt apparmor=unconfined --memory=4096mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685034446-16582@sha256:aa728b22374c829d1e5b0a5d64d51d3e0ae0f2b191381d957516fdff68f357c8
	I0531 18:56:41.304304   35394 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-546551 --format={{.State.Running}}
	I0531 18:56:41.326876   35394 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-546551 --format={{.State.Status}}
	I0531 18:56:41.353884   35394 cli_runner.go:164] Run: docker exec ingress-addon-legacy-546551 stat /var/lib/dpkg/alternatives/iptables
	I0531 18:56:41.433169   35394 oci.go:144] the created container "ingress-addon-legacy-546551" has a running status.
	I0531 18:56:41.433194   35394 kic.go:221] Creating ssh key for kic: /home/jenkins/minikube-integration/16569-2389/.minikube/machines/ingress-addon-legacy-546551/id_rsa...
	I0531 18:56:42.258361   35394 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16569-2389/.minikube/machines/ingress-addon-legacy-546551/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0531 18:56:42.258476   35394 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/16569-2389/.minikube/machines/ingress-addon-legacy-546551/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0531 18:56:42.296540   35394 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-546551 --format={{.State.Status}}
	I0531 18:56:42.319779   35394 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0531 18:56:42.319800   35394 kic_runner.go:114] Args: [docker exec --privileged ingress-addon-legacy-546551 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0531 18:56:42.392034   35394 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-546551 --format={{.State.Status}}
	I0531 18:56:42.416305   35394 machine.go:88] provisioning docker machine ...
	I0531 18:56:42.416334   35394 ubuntu.go:169] provisioning hostname "ingress-addon-legacy-546551"
	I0531 18:56:42.416412   35394 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-546551
	I0531 18:56:42.441193   35394 main.go:141] libmachine: Using SSH client type: native
	I0531 18:56:42.441645   35394 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x39eaa0] 0x3a1430 <nil>  [] 0s} 127.0.0.1 32787 <nil> <nil>}
	I0531 18:56:42.441716   35394 main.go:141] libmachine: About to run SSH command:
	sudo hostname ingress-addon-legacy-546551 && echo "ingress-addon-legacy-546551" | sudo tee /etc/hostname
	I0531 18:56:42.596698   35394 main.go:141] libmachine: SSH cmd err, output: <nil>: ingress-addon-legacy-546551
	
	I0531 18:56:42.596775   35394 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-546551
	I0531 18:56:42.618416   35394 main.go:141] libmachine: Using SSH client type: native
	I0531 18:56:42.618864   35394 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x39eaa0] 0x3a1430 <nil>  [] 0s} 127.0.0.1 32787 <nil> <nil>}
	I0531 18:56:42.618884   35394 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\singress-addon-legacy-546551' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ingress-addon-legacy-546551/g' /etc/hosts;
				else 
					echo '127.0.1.1 ingress-addon-legacy-546551' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0531 18:56:42.756020   35394 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0531 18:56:42.756049   35394 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/16569-2389/.minikube CaCertPath:/home/jenkins/minikube-integration/16569-2389/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/16569-2389/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/16569-2389/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/16569-2389/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/16569-2389/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/16569-2389/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/16569-2389/.minikube}
	I0531 18:56:42.756082   35394 ubuntu.go:177] setting up certificates
	I0531 18:56:42.756092   35394 provision.go:83] configureAuth start
	I0531 18:56:42.756159   35394 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-546551
	I0531 18:56:42.775927   35394 provision.go:138] copyHostCerts
	I0531 18:56:42.775985   35394 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16569-2389/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/16569-2389/.minikube/ca.pem
	I0531 18:56:42.776033   35394 exec_runner.go:144] found /home/jenkins/minikube-integration/16569-2389/.minikube/ca.pem, removing ...
	I0531 18:56:42.776045   35394 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16569-2389/.minikube/ca.pem
	I0531 18:56:42.776157   35394 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16569-2389/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/16569-2389/.minikube/ca.pem (1078 bytes)
	I0531 18:56:42.776265   35394 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16569-2389/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/16569-2389/.minikube/cert.pem
	I0531 18:56:42.776292   35394 exec_runner.go:144] found /home/jenkins/minikube-integration/16569-2389/.minikube/cert.pem, removing ...
	I0531 18:56:42.776305   35394 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16569-2389/.minikube/cert.pem
	I0531 18:56:42.776384   35394 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16569-2389/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/16569-2389/.minikube/cert.pem (1123 bytes)
	I0531 18:56:42.776481   35394 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16569-2389/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/16569-2389/.minikube/key.pem
	I0531 18:56:42.776506   35394 exec_runner.go:144] found /home/jenkins/minikube-integration/16569-2389/.minikube/key.pem, removing ...
	I0531 18:56:42.776514   35394 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16569-2389/.minikube/key.pem
	I0531 18:56:42.776545   35394 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16569-2389/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/16569-2389/.minikube/key.pem (1679 bytes)
	I0531 18:56:42.776623   35394 provision.go:112] generating server cert: /home/jenkins/minikube-integration/16569-2389/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/16569-2389/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/16569-2389/.minikube/certs/ca-key.pem org=jenkins.ingress-addon-legacy-546551 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube ingress-addon-legacy-546551]
	I0531 18:56:43.697948   35394 provision.go:172] copyRemoteCerts
	I0531 18:56:43.698050   35394 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0531 18:56:43.698101   35394 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-546551
	I0531 18:56:43.716638   35394 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32787 SSHKeyPath:/home/jenkins/minikube-integration/16569-2389/.minikube/machines/ingress-addon-legacy-546551/id_rsa Username:docker}
	I0531 18:56:43.813434   35394 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16569-2389/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0531 18:56:43.813510   35394 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16569-2389/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0531 18:56:43.842075   35394 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16569-2389/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0531 18:56:43.842135   35394 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16569-2389/.minikube/machines/server.pem --> /etc/docker/server.pem (1257 bytes)
	I0531 18:56:43.871370   35394 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16569-2389/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0531 18:56:43.871431   35394 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16569-2389/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0531 18:56:43.900087   35394 provision.go:86] duration metric: configureAuth took 1.143975777s
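The configureAuth step above signs a fresh server certificate against the shared CA, with the SAN list shown at provision.go:112. A minimal openssl sketch of the same operation (hypothetical manual replay; assumes the ca.pem/ca-key.pem pair from the certs directory in the log):

    # Sign a docker-machine-style server cert carrying the listed SANs
    openssl req -new -newkey rsa:2048 -nodes \
      -subj "/O=jenkins.ingress-addon-legacy-546551" \
      -keyout server-key.pem -out server.csr
    openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial -days 365 \
      -extfile <(printf 'subjectAltName=IP:192.168.49.2,IP:127.0.0.1,DNS:localhost,DNS:minikube,DNS:ingress-addon-legacy-546551') \
      -out server.pem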
	I0531 18:56:43.900165   35394 ubuntu.go:193] setting minikube options for container-runtime
	I0531 18:56:43.900406   35394 config.go:182] Loaded profile config "ingress-addon-legacy-546551": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.18.20
	I0531 18:56:43.900534   35394 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-546551
	I0531 18:56:43.918965   35394 main.go:141] libmachine: Using SSH client type: native
	I0531 18:56:43.919404   35394 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x39eaa0] 0x3a1430 <nil>  [] 0s} 127.0.0.1 32787 <nil> <nil>}
	I0531 18:56:43.919420   35394 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0531 18:56:44.190625   35394 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0531 18:56:44.190699   35394 machine.go:91] provisioned docker machine in 1.774374652s
	I0531 18:56:44.190723   35394 client.go:171] LocalClient.Create took 10.040470543s
	I0531 18:56:44.190773   35394 start.go:167] duration metric: libmachine.API.Create for "ingress-addon-legacy-546551" took 10.040546785s
	I0531 18:56:44.190804   35394 start.go:300] post-start starting for "ingress-addon-legacy-546551" (driver="docker")
	I0531 18:56:44.190823   35394 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0531 18:56:44.190917   35394 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0531 18:56:44.190981   35394 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-546551
	I0531 18:56:44.209233   35394 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32787 SSHKeyPath:/home/jenkins/minikube-integration/16569-2389/.minikube/machines/ingress-addon-legacy-546551/id_rsa Username:docker}
	I0531 18:56:44.306017   35394 ssh_runner.go:195] Run: cat /etc/os-release
	I0531 18:56:44.310237   35394 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0531 18:56:44.310289   35394 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0531 18:56:44.310306   35394 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0531 18:56:44.310317   35394 info.go:137] Remote host: Ubuntu 22.04.2 LTS
	I0531 18:56:44.310326   35394 filesync.go:126] Scanning /home/jenkins/minikube-integration/16569-2389/.minikube/addons for local assets ...
	I0531 18:56:44.310397   35394 filesync.go:126] Scanning /home/jenkins/minikube-integration/16569-2389/.minikube/files for local assets ...
	I0531 18:56:44.310480   35394 filesync.go:149] local asset: /home/jenkins/minikube-integration/16569-2389/.minikube/files/etc/ssl/certs/78042.pem -> 78042.pem in /etc/ssl/certs
	I0531 18:56:44.310490   35394 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16569-2389/.minikube/files/etc/ssl/certs/78042.pem -> /etc/ssl/certs/78042.pem
	I0531 18:56:44.310606   35394 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0531 18:56:44.320964   35394 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16569-2389/.minikube/files/etc/ssl/certs/78042.pem --> /etc/ssl/certs/78042.pem (1708 bytes)
	I0531 18:56:44.349584   35394 start.go:303] post-start completed in 158.754151ms
	I0531 18:56:44.350007   35394 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-546551
	I0531 18:56:44.367984   35394 profile.go:148] Saving config to /home/jenkins/minikube-integration/16569-2389/.minikube/profiles/ingress-addon-legacy-546551/config.json ...
	I0531 18:56:44.368281   35394 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0531 18:56:44.368338   35394 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-546551
	I0531 18:56:44.386369   35394 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32787 SSHKeyPath:/home/jenkins/minikube-integration/16569-2389/.minikube/machines/ingress-addon-legacy-546551/id_rsa Username:docker}
	I0531 18:56:44.480951   35394 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0531 18:56:44.487002   35394 start.go:128] duration metric: createHost completed in 10.339547775s
	I0531 18:56:44.487029   35394 start.go:83] releasing machines lock for "ingress-addon-legacy-546551", held for 10.339688524s
	I0531 18:56:44.487111   35394 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-546551
	I0531 18:56:44.505186   35394 ssh_runner.go:195] Run: cat /version.json
	I0531 18:56:44.505256   35394 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-546551
	I0531 18:56:44.505516   35394 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0531 18:56:44.505574   35394 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-546551
	I0531 18:56:44.530944   35394 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32787 SSHKeyPath:/home/jenkins/minikube-integration/16569-2389/.minikube/machines/ingress-addon-legacy-546551/id_rsa Username:docker}
	I0531 18:56:44.535184   35394 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32787 SSHKeyPath:/home/jenkins/minikube-integration/16569-2389/.minikube/machines/ingress-addon-legacy-546551/id_rsa Username:docker}
	I0531 18:56:44.766907   35394 ssh_runner.go:195] Run: systemctl --version
	I0531 18:56:44.772686   35394 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0531 18:56:44.922926   35394 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0531 18:56:44.928669   35394 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0531 18:56:44.952025   35394 cni.go:220] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0531 18:56:44.952105   35394 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0531 18:56:44.991591   35394 cni.go:261] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0531 18:56:44.991611   35394 start.go:481] detecting cgroup driver to use...
	I0531 18:56:44.991642   35394 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0531 18:56:44.991701   35394 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0531 18:56:45.027684   35394 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0531 18:56:45.043907   35394 docker.go:193] disabling cri-docker service (if available) ...
	I0531 18:56:45.044005   35394 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0531 18:56:45.065669   35394 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0531 18:56:45.085333   35394 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0531 18:56:45.194352   35394 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0531 18:56:45.308329   35394 docker.go:209] disabling docker service ...
	I0531 18:56:45.308567   35394 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0531 18:56:45.335698   35394 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0531 18:56:45.350608   35394 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0531 18:56:45.454357   35394 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0531 18:56:45.550761   35394 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0531 18:56:45.564861   35394 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0531 18:56:45.584384   35394 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0531 18:56:45.584448   35394 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0531 18:56:45.596666   35394 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0531 18:56:45.596745   35394 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0531 18:56:45.608727   35394 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0531 18:56:45.620773   35394 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
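The three sed edits above leave /etc/crio/crio.conf.d/02-crio.conf pinning the pause image and the cgroup driver. Assuming the stock kicbase drop-in layout, the rewritten fragment would read:

    # /etc/crio/crio.conf.d/02-crio.conf (expected result of the edits above)
    [crio.image]
    pause_image = "registry.k8s.io/pause:3.2"

    [crio.runtime]
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"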
	I0531 18:56:45.632665   35394 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0531 18:56:45.643849   35394 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0531 18:56:45.654443   35394 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0531 18:56:45.664854   35394 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0531 18:56:45.751811   35394 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0531 18:56:45.878302   35394 start.go:528] Will wait 60s for socket path /var/run/crio/crio.sock
	I0531 18:56:45.878377   35394 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0531 18:56:45.883161   35394 start.go:549] Will wait 60s for crictl version
	I0531 18:56:45.883222   35394 ssh_runner.go:195] Run: which crictl
	I0531 18:56:45.887647   35394 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0531 18:56:45.934715   35394 start.go:565] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.5
	RuntimeApiVersion:  v1
	I0531 18:56:45.934825   35394 ssh_runner.go:195] Run: crio --version
	I0531 18:56:45.985350   35394 ssh_runner.go:195] Run: crio --version
	I0531 18:56:46.033937   35394 out.go:177] * Preparing Kubernetes v1.18.20 on CRI-O 1.24.5 ...
	I0531 18:56:46.036082   35394 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-546551 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0531 18:56:46.054094   35394 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0531 18:56:46.058690   35394 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
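The /etc/hosts rewrite above uses the temp-file-then-sudo-cp pattern because `sudo echo ... > /etc/hosts` would not work: the redirection is opened by the unprivileged caller's shell before sudo runs. The same pipeline, expanded for readability:

    {
      grep -v $'\thost.minikube.internal$' /etc/hosts   # drop any stale entry
      echo $'192.168.49.1\thost.minikube.internal'      # append the fresh mapping
    } > /tmp/h.$$
    sudo cp /tmp/h.$$ /etc/hosts                        # only the copy needs root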
	I0531 18:56:46.072368   35394 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime crio
	I0531 18:56:46.072448   35394 ssh_runner.go:195] Run: sudo crictl images --output json
	I0531 18:56:46.125595   35394 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.18.20". assuming images are not preloaded.
	I0531 18:56:46.125668   35394 ssh_runner.go:195] Run: which lz4
	I0531 18:56:46.130262   35394 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16569-2389/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4 -> /preloaded.tar.lz4
	I0531 18:56:46.130361   35394 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0531 18:56:46.134898   35394 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0531 18:56:46.134937   35394 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16569-2389/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4 --> /preloaded.tar.lz4 (489766197 bytes)
	I0531 18:56:48.551509   35394 crio.go:444] Took 2.421188 seconds to copy over tarball
	I0531 18:56:48.551627   35394 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0531 18:56:51.162854   35394 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (2.611196606s)
	I0531 18:56:51.162919   35394 crio.go:451] Took 2.611386 seconds to extract the tarball
	I0531 18:56:51.162934   35394 ssh_runner.go:146] rm: /preloaded.tar.lz4
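The preload sequence above is an scp of the ~490MB lz4 tarball followed by extraction straight over /var, where CRI-O keeps its image and layer store. Run by hand on the node, the same two steps are:

    sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4   # lz4-decompress and unpack over /var
    sudo rm /preloaded.tar.lz4                       # reclaim the ~490MB once images are in place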
	I0531 18:56:51.249443   35394 ssh_runner.go:195] Run: sudo crictl images --output json
	I0531 18:56:51.294911   35394 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.18.20". assuming images are not preloaded.
	I0531 18:56:51.294935   35394 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.18.20 registry.k8s.io/kube-controller-manager:v1.18.20 registry.k8s.io/kube-scheduler:v1.18.20 registry.k8s.io/kube-proxy:v1.18.20 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.3-0 registry.k8s.io/coredns:1.6.7 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0531 18:56:51.295010   35394 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.18.20
	I0531 18:56:51.295024   35394 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0531 18:56:51.295232   35394 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0531 18:56:51.295241   35394 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.18.20
	I0531 18:56:51.295322   35394 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.3-0
	I0531 18:56:51.295393   35394 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.18.20
	I0531 18:56:51.295493   35394 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.18.20
	I0531 18:56:51.295406   35394 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.7
	I0531 18:56:51.296463   35394 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.18.20
	I0531 18:56:51.297086   35394 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.3-0
	I0531 18:56:51.297313   35394 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.18.20
	I0531 18:56:51.297391   35394 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.18.20
	I0531 18:56:51.297489   35394 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.18.20
	I0531 18:56:51.297620   35394 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0531 18:56:51.297806   35394 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0531 18:56:51.298825   35394 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.7: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.7
	W0531 18:56:51.731367   35394 image.go:265] image registry.k8s.io/coredns:1.6.7 arch mismatch: want arm64 got amd64. fixing
	I0531 18:56:51.731638   35394 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.7
	W0531 18:56:51.746944   35394 image.go:265] image registry.k8s.io/kube-controller-manager:v1.18.20 arch mismatch: want arm64 got amd64. fixing
	I0531 18:56:51.747197   35394 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.18.20
	I0531 18:56:51.751050   35394 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	W0531 18:56:51.751432   35394 image.go:265] image registry.k8s.io/etcd:3.4.3-0 arch mismatch: want arm64 got amd64. fixing
	W0531 18:56:51.751732   35394 image.go:265] image registry.k8s.io/kube-scheduler:v1.18.20 arch mismatch: want arm64 got amd64. fixing
	I0531 18:56:51.751972   35394 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.3-0
	I0531 18:56:51.752050   35394 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.18.20
	W0531 18:56:51.753759   35394 image.go:265] image registry.k8s.io/kube-proxy:v1.18.20 arch mismatch: want arm64 got amd64. fixing
	I0531 18:56:51.754163   35394 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.18.20
	W0531 18:56:51.754343   35394 image.go:265] image registry.k8s.io/kube-apiserver:v1.18.20 arch mismatch: want arm64 got amd64. fixing
	I0531 18:56:51.754565   35394 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.18.20
	W0531 18:56:51.927678   35394 image.go:265] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0531 18:56:51.927858   35394 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0531 18:56:51.938846   35394 cache_images.go:116] "registry.k8s.io/coredns:1.6.7" needs transfer: "registry.k8s.io/coredns:1.6.7" does not exist at hash "ff3af22d8878afc6985d3fec3e066d00ef431aa166c3a01ac58f1990adc92a2c" in container runtime
	I0531 18:56:51.938903   35394 cri.go:217] Removing image: registry.k8s.io/coredns:1.6.7
	I0531 18:56:51.938971   35394 ssh_runner.go:195] Run: which crictl
	I0531 18:56:51.939078   35394 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.18.20" needs transfer: "registry.k8s.io/kube-controller-manager:v1.18.20" does not exist at hash "297c79afbdb81ceb4cf857e0c54a0de7b6ce7ebe01e6cab68fc8baf342be3ea7" in container runtime
	I0531 18:56:51.939102   35394 cri.go:217] Removing image: registry.k8s.io/kube-controller-manager:v1.18.20
	I0531 18:56:51.939146   35394 ssh_runner.go:195] Run: which crictl
	I0531 18:56:52.020210   35394 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.18.20" needs transfer: "registry.k8s.io/kube-scheduler:v1.18.20" does not exist at hash "177548d745cb87f773d02f41d453af2f2a1479dbe3c32e749cf6d8145c005e79" in container runtime
	I0531 18:56:52.020264   35394 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "2a060e2e7101d419352bf82c613158587400be743482d9a537ec4a9d1b4eb93c" in container runtime
	I0531 18:56:52.020291   35394 cri.go:217] Removing image: registry.k8s.io/pause:3.2
	I0531 18:56:52.020322   35394 cache_images.go:116] "registry.k8s.io/etcd:3.4.3-0" needs transfer: "registry.k8s.io/etcd:3.4.3-0" does not exist at hash "29dd247b2572efbe28fcaea3fef1c5d72593da59f7350e3f6d2e6618983f9c03" in container runtime
	I0531 18:56:52.020339   35394 cri.go:217] Removing image: registry.k8s.io/etcd:3.4.3-0
	I0531 18:56:52.020348   35394 ssh_runner.go:195] Run: which crictl
	I0531 18:56:52.020374   35394 ssh_runner.go:195] Run: which crictl
	I0531 18:56:52.020266   35394 cri.go:217] Removing image: registry.k8s.io/kube-scheduler:v1.18.20
	I0531 18:56:52.020450   35394 ssh_runner.go:195] Run: which crictl
	I0531 18:56:52.020450   35394 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.18.20" needs transfer: "registry.k8s.io/kube-proxy:v1.18.20" does not exist at hash "b11cdc97ac6ac4ef2b3b0662edbe16597084b17cbc8e3d61fcaf4ef827a7ed18" in container runtime
	I0531 18:56:52.020472   35394 cri.go:217] Removing image: registry.k8s.io/kube-proxy:v1.18.20
	I0531 18:56:52.020504   35394 ssh_runner.go:195] Run: which crictl
	I0531 18:56:52.020517   35394 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.18.20" needs transfer: "registry.k8s.io/kube-apiserver:v1.18.20" does not exist at hash "d353007847ec85700463981309a5846c8d9c93fbcd1323104266212926d68257" in container runtime
	I0531 18:56:52.020536   35394 cri.go:217] Removing image: registry.k8s.io/kube-apiserver:v1.18.20
	I0531 18:56:52.020557   35394 ssh_runner.go:195] Run: which crictl
	I0531 18:56:52.167606   35394 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I0531 18:56:52.167685   35394 cri.go:217] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0531 18:56:52.167707   35394 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.6.7
	I0531 18:56:52.167764   35394 ssh_runner.go:195] Run: which crictl
	I0531 18:56:52.167822   35394 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.18.20
	I0531 18:56:52.167932   35394 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.18.20
	I0531 18:56:52.167957   35394 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.3-0
	I0531 18:56:52.168013   35394 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.18.20
	I0531 18:56:52.168066   35394 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.18.20
	I0531 18:56:52.168092   35394 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0531 18:56:52.358547   35394 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16569-2389/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.4.3-0
	I0531 18:56:52.358625   35394 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16569-2389/.minikube/cache/images/arm64/registry.k8s.io/coredns_1.6.7
	I0531 18:56:52.358675   35394 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0531 18:56:52.358776   35394 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16569-2389/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.18.20
	I0531 18:56:52.358816   35394 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16569-2389/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.18.20
	I0531 18:56:52.360353   35394 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16569-2389/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.18.20
	I0531 18:56:52.360428   35394 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16569-2389/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.18.20
	I0531 18:56:52.365603   35394 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16569-2389/.minikube/cache/images/arm64/registry.k8s.io/pause_3.2
	I0531 18:56:52.416665   35394 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16569-2389/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0531 18:56:52.416743   35394 cache_images.go:92] LoadImages completed in 1.121794871s
	W0531 18:56:52.416807   35394 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/16569-2389/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.4.3-0: no such file or directory
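The LoadImages warning above is expected on a first run: every daemon lookup fails locally, the remote manifests resolve to amd64 (hence the "arch mismatch: want arm64 got amd64. fixing" lines), the mismatched tags are removed with crictl rmi, and the fallback cache under .minikube/cache/images/arm64 is still empty, so the images are simply pulled later. The architecture a published tag resolves to can be checked by hand (assumes skopeo and jq are installed):

    skopeo inspect docker://registry.k8s.io/kube-apiserver:v1.18.20 | jq -r .Architecture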
	I0531 18:56:52.416874   35394 ssh_runner.go:195] Run: crio config
	I0531 18:56:52.474186   35394 cni.go:84] Creating CNI manager for ""
	I0531 18:56:52.474211   35394 cni.go:142] "docker" driver + "crio" runtime found, recommending kindnet
	I0531 18:56:52.474223   35394 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0531 18:56:52.474242   35394 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.18.20 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ingress-addon-legacy-546551 NodeName:ingress-addon-legacy-546551 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0531 18:56:52.474391   35394 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "ingress-addon-legacy-546551"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.18.20
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
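The rendered config above is what kubeadm consumes further down. It can be exercised without mutating the node first; a sketch, assuming the binary staged under /var/lib/minikube/binaries and kubeadm v1.18.x dry-run semantics:

    sudo /var/lib/minikube/binaries/v1.18.20/kubeadm init \
      --config /var/tmp/minikube/kubeadm.yaml --dry-run   # renders manifests to a temp dir only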
	
	I0531 18:56:52.474496   35394 kubeadm.go:971] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.18.20/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --enforce-node-allocatable= --hostname-override=ingress-addon-legacy-546551 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-546551 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0531 18:56:52.474568   35394 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.18.20
	I0531 18:56:52.485214   35394 binaries.go:44] Found k8s binaries, skipping transfer
	I0531 18:56:52.485286   35394 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0531 18:56:52.495851   35394 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (486 bytes)
	I0531 18:56:52.517802   35394 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (353 bytes)
	I0531 18:56:52.539359   35394 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
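The three scp-from-memory calls above install the kubelet drop-in, the unit file, and the staged kubeadm config. That systemd actually sees the drop-in can be confirmed with:

    sudo systemctl daemon-reload
    systemctl cat kubelet   # should print kubelet.service plus the 10-kubeadm.conf drop-in above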
	I0531 18:56:52.560865   35394 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0531 18:56:52.565509   35394 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0531 18:56:52.579266   35394 certs.go:56] Setting up /home/jenkins/minikube-integration/16569-2389/.minikube/profiles/ingress-addon-legacy-546551 for IP: 192.168.49.2
	I0531 18:56:52.579341   35394 certs.go:190] acquiring lock for shared ca certs: {Name:mk0147accf8b8da231d39646bdc89fced67451cd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0531 18:56:52.579541   35394 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/16569-2389/.minikube/ca.key
	I0531 18:56:52.579599   35394 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/16569-2389/.minikube/proxy-client-ca.key
	I0531 18:56:52.579650   35394 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/16569-2389/.minikube/profiles/ingress-addon-legacy-546551/client.key
	I0531 18:56:52.579667   35394 crypto.go:68] Generating cert /home/jenkins/minikube-integration/16569-2389/.minikube/profiles/ingress-addon-legacy-546551/client.crt with IP's: []
	I0531 18:56:53.634085   35394 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/16569-2389/.minikube/profiles/ingress-addon-legacy-546551/client.crt ...
	I0531 18:56:53.634117   35394 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16569-2389/.minikube/profiles/ingress-addon-legacy-546551/client.crt: {Name:mk0730b27436c295641ddf146f794e4b22854320 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0531 18:56:53.634755   35394 crypto.go:164] Writing key to /home/jenkins/minikube-integration/16569-2389/.minikube/profiles/ingress-addon-legacy-546551/client.key ...
	I0531 18:56:53.634778   35394 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16569-2389/.minikube/profiles/ingress-addon-legacy-546551/client.key: {Name:mk085f8cb9f77e7fc3e463efb19df7127821bc96 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0531 18:56:53.635271   35394 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/16569-2389/.minikube/profiles/ingress-addon-legacy-546551/apiserver.key.dd3b5fb2
	I0531 18:56:53.635295   35394 crypto.go:68] Generating cert /home/jenkins/minikube-integration/16569-2389/.minikube/profiles/ingress-addon-legacy-546551/apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0531 18:56:54.299553   35394 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/16569-2389/.minikube/profiles/ingress-addon-legacy-546551/apiserver.crt.dd3b5fb2 ...
	I0531 18:56:54.299588   35394 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16569-2389/.minikube/profiles/ingress-addon-legacy-546551/apiserver.crt.dd3b5fb2: {Name:mk008be5239f48679e819e23749c8f718bda5426 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0531 18:56:54.300257   35394 crypto.go:164] Writing key to /home/jenkins/minikube-integration/16569-2389/.minikube/profiles/ingress-addon-legacy-546551/apiserver.key.dd3b5fb2 ...
	I0531 18:56:54.300274   35394 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16569-2389/.minikube/profiles/ingress-addon-legacy-546551/apiserver.key.dd3b5fb2: {Name:mk42b444eca94ec02e9d7d77dfcc200898e9e2f2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0531 18:56:54.300775   35394 certs.go:337] copying /home/jenkins/minikube-integration/16569-2389/.minikube/profiles/ingress-addon-legacy-546551/apiserver.crt.dd3b5fb2 -> /home/jenkins/minikube-integration/16569-2389/.minikube/profiles/ingress-addon-legacy-546551/apiserver.crt
	I0531 18:56:54.300866   35394 certs.go:341] copying /home/jenkins/minikube-integration/16569-2389/.minikube/profiles/ingress-addon-legacy-546551/apiserver.key.dd3b5fb2 -> /home/jenkins/minikube-integration/16569-2389/.minikube/profiles/ingress-addon-legacy-546551/apiserver.key
	I0531 18:56:54.300930   35394 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/16569-2389/.minikube/profiles/ingress-addon-legacy-546551/proxy-client.key
	I0531 18:56:54.300946   35394 crypto.go:68] Generating cert /home/jenkins/minikube-integration/16569-2389/.minikube/profiles/ingress-addon-legacy-546551/proxy-client.crt with IP's: []
	I0531 18:56:54.504533   35394 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/16569-2389/.minikube/profiles/ingress-addon-legacy-546551/proxy-client.crt ...
	I0531 18:56:54.504566   35394 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16569-2389/.minikube/profiles/ingress-addon-legacy-546551/proxy-client.crt: {Name:mka504c7f43e2b2c718fbcf38456cd7f04e17e01 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0531 18:56:54.505623   35394 crypto.go:164] Writing key to /home/jenkins/minikube-integration/16569-2389/.minikube/profiles/ingress-addon-legacy-546551/proxy-client.key ...
	I0531 18:56:54.505651   35394 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16569-2389/.minikube/profiles/ingress-addon-legacy-546551/proxy-client.key: {Name:mk7a5f062bcde42bbdec3f9d2112c8180a647576 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0531 18:56:54.506122   35394 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16569-2389/.minikube/profiles/ingress-addon-legacy-546551/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0531 18:56:54.506151   35394 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16569-2389/.minikube/profiles/ingress-addon-legacy-546551/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0531 18:56:54.506175   35394 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16569-2389/.minikube/profiles/ingress-addon-legacy-546551/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0531 18:56:54.506198   35394 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16569-2389/.minikube/profiles/ingress-addon-legacy-546551/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0531 18:56:54.506214   35394 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16569-2389/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0531 18:56:54.506225   35394 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16569-2389/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0531 18:56:54.506239   35394 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16569-2389/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0531 18:56:54.506251   35394 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16569-2389/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0531 18:56:54.506307   35394 certs.go:437] found cert: /home/jenkins/minikube-integration/16569-2389/.minikube/certs/home/jenkins/minikube-integration/16569-2389/.minikube/certs/7804.pem (1338 bytes)
	W0531 18:56:54.506353   35394 certs.go:433] ignoring /home/jenkins/minikube-integration/16569-2389/.minikube/certs/home/jenkins/minikube-integration/16569-2389/.minikube/certs/7804_empty.pem, impossibly tiny 0 bytes
	I0531 18:56:54.506388   35394 certs.go:437] found cert: /home/jenkins/minikube-integration/16569-2389/.minikube/certs/home/jenkins/minikube-integration/16569-2389/.minikube/certs/ca-key.pem (1675 bytes)
	I0531 18:56:54.506420   35394 certs.go:437] found cert: /home/jenkins/minikube-integration/16569-2389/.minikube/certs/home/jenkins/minikube-integration/16569-2389/.minikube/certs/ca.pem (1078 bytes)
	I0531 18:56:54.506455   35394 certs.go:437] found cert: /home/jenkins/minikube-integration/16569-2389/.minikube/certs/home/jenkins/minikube-integration/16569-2389/.minikube/certs/cert.pem (1123 bytes)
	I0531 18:56:54.506483   35394 certs.go:437] found cert: /home/jenkins/minikube-integration/16569-2389/.minikube/certs/home/jenkins/minikube-integration/16569-2389/.minikube/certs/key.pem (1679 bytes)
	I0531 18:56:54.506537   35394 certs.go:437] found cert: /home/jenkins/minikube-integration/16569-2389/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/16569-2389/.minikube/files/etc/ssl/certs/78042.pem (1708 bytes)
	I0531 18:56:54.506572   35394 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16569-2389/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0531 18:56:54.506588   35394 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16569-2389/.minikube/certs/7804.pem -> /usr/share/ca-certificates/7804.pem
	I0531 18:56:54.506603   35394 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16569-2389/.minikube/files/etc/ssl/certs/78042.pem -> /usr/share/ca-certificates/78042.pem
	I0531 18:56:54.507197   35394 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16569-2389/.minikube/profiles/ingress-addon-legacy-546551/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0531 18:56:54.536991   35394 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16569-2389/.minikube/profiles/ingress-addon-legacy-546551/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0531 18:56:54.566325   35394 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16569-2389/.minikube/profiles/ingress-addon-legacy-546551/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0531 18:56:54.596737   35394 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16569-2389/.minikube/profiles/ingress-addon-legacy-546551/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0531 18:56:54.627323   35394 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16569-2389/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0531 18:56:54.656896   35394 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16569-2389/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0531 18:56:54.685253   35394 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16569-2389/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0531 18:56:54.713871   35394 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16569-2389/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0531 18:56:54.742084   35394 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16569-2389/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0531 18:56:54.771464   35394 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16569-2389/.minikube/certs/7804.pem --> /usr/share/ca-certificates/7804.pem (1338 bytes)
	I0531 18:56:54.801706   35394 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16569-2389/.minikube/files/etc/ssl/certs/78042.pem --> /usr/share/ca-certificates/78042.pem (1708 bytes)
	I0531 18:56:54.831036   35394 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0531 18:56:54.852563   35394 ssh_runner.go:195] Run: openssl version
	I0531 18:56:54.859703   35394 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0531 18:56:54.872495   35394 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0531 18:56:54.877922   35394 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 May 31 18:45 /usr/share/ca-certificates/minikubeCA.pem
	I0531 18:56:54.877989   35394 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0531 18:56:54.886603   35394 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0531 18:56:54.898232   35394 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7804.pem && ln -fs /usr/share/ca-certificates/7804.pem /etc/ssl/certs/7804.pem"
	I0531 18:56:54.910129   35394 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7804.pem
	I0531 18:56:54.914906   35394 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 May 31 18:52 /usr/share/ca-certificates/7804.pem
	I0531 18:56:54.914973   35394 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7804.pem
	I0531 18:56:54.923622   35394 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/7804.pem /etc/ssl/certs/51391683.0"
	I0531 18:56:54.935493   35394 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/78042.pem && ln -fs /usr/share/ca-certificates/78042.pem /etc/ssl/certs/78042.pem"
	I0531 18:56:54.947095   35394 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/78042.pem
	I0531 18:56:54.951946   35394 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 May 31 18:52 /usr/share/ca-certificates/78042.pem
	I0531 18:56:54.952023   35394 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/78042.pem
	I0531 18:56:54.960996   35394 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/78042.pem /etc/ssl/certs/3ec20f2e.0"
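The hash-named links above (b5213941.0, 51391683.0, 3ec20f2e.0) follow OpenSSL's c_rehash convention: the filename is the certificate's subject-name hash plus a ".0" collision suffix, which is how OpenSSL locates a CA at verification time. The name can be derived by hand from any of the PEMs:

    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)  # e.g. b5213941
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/$h.0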
	I0531 18:56:54.973022   35394 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0531 18:56:54.977732   35394 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0531 18:56:54.977832   35394 kubeadm.go:404] StartCluster: {Name:ingress-addon-legacy-546551 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685034446-16582@sha256:aa728b22374c829d1e5b0a5d64d51d3e0ae0f2b191381d957516fdff68f357c8 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-546551 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0531 18:56:54.977925   35394 cri.go:53] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0531 18:56:54.977986   35394 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0531 18:56:55.025278   35394 cri.go:88] found id: ""
	I0531 18:56:55.025351   35394 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0531 18:56:55.036962   35394 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0531 18:56:55.048200   35394 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0531 18:56:55.048319   35394 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0531 18:56:55.059512   35394 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0531 18:56:55.059576   35394 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0531 18:56:55.116921   35394 kubeadm.go:322] W0531 18:56:55.116322    1239 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
	I0531 18:56:55.175853   35394 kubeadm.go:322] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1037-aws\n", err: exit status 1
	I0531 18:56:55.266545   35394 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0531 18:57:00.687244   35394 kubeadm.go:322] W0531 18:57:00.686891    1239 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I0531 18:57:00.688753   35394 kubeadm.go:322] W0531 18:57:00.688376    1239 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I0531 18:57:14.172821   35394 kubeadm.go:322] [init] Using Kubernetes version: v1.18.20
	I0531 18:57:14.172876   35394 kubeadm.go:322] [preflight] Running pre-flight checks
	I0531 18:57:14.172960   35394 kubeadm.go:322] [preflight] The system verification failed. Printing the output from the verification:
	I0531 18:57:14.173011   35394 kubeadm.go:322] KERNEL_VERSION: 5.15.0-1037-aws
	I0531 18:57:14.173044   35394 kubeadm.go:322] OS: Linux
	I0531 18:57:14.173089   35394 kubeadm.go:322] CGROUPS_CPU: enabled
	I0531 18:57:14.173134   35394 kubeadm.go:322] CGROUPS_CPUACCT: enabled
	I0531 18:57:14.173179   35394 kubeadm.go:322] CGROUPS_CPUSET: enabled
	I0531 18:57:14.173224   35394 kubeadm.go:322] CGROUPS_DEVICES: enabled
	I0531 18:57:14.173269   35394 kubeadm.go:322] CGROUPS_FREEZER: enabled
	I0531 18:57:14.173314   35394 kubeadm.go:322] CGROUPS_MEMORY: enabled
	I0531 18:57:14.173382   35394 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0531 18:57:14.173470   35394 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0531 18:57:14.173557   35394 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0531 18:57:14.173663   35394 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0531 18:57:14.173742   35394 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0531 18:57:14.173778   35394 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0531 18:57:14.173839   35394 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0531 18:57:14.176045   35394 out.go:204]   - Generating certificates and keys ...
	I0531 18:57:14.176133   35394 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0531 18:57:14.176199   35394 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0531 18:57:14.176300   35394 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0531 18:57:14.176365   35394 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0531 18:57:14.176422   35394 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0531 18:57:14.176474   35394 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0531 18:57:14.176534   35394 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0531 18:57:14.176690   35394 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [ingress-addon-legacy-546551 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0531 18:57:14.176760   35394 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0531 18:57:14.176906   35394 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [ingress-addon-legacy-546551 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0531 18:57:14.176972   35394 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0531 18:57:14.177038   35394 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0531 18:57:14.177080   35394 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0531 18:57:14.177132   35394 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0531 18:57:14.177183   35394 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0531 18:57:14.177232   35394 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0531 18:57:14.177290   35394 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0531 18:57:14.177341   35394 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0531 18:57:14.177402   35394 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0531 18:57:14.179233   35394 out.go:204]   - Booting up control plane ...
	I0531 18:57:14.179318   35394 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0531 18:57:14.179390   35394 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0531 18:57:14.179452   35394 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0531 18:57:14.179528   35394 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0531 18:57:14.179671   35394 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0531 18:57:14.179743   35394 kubeadm.go:322] [apiclient] All control plane components are healthy after 12.002595 seconds
	I0531 18:57:14.179842   35394 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0531 18:57:14.179968   35394 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config-1.18" in namespace kube-system with the configuration for the kubelets in the cluster
	I0531 18:57:14.180023   35394 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0531 18:57:14.180148   35394 kubeadm.go:322] [mark-control-plane] Marking the node ingress-addon-legacy-546551 as control-plane by adding the label "node-role.kubernetes.io/master=''"
	I0531 18:57:14.180200   35394 kubeadm.go:322] [bootstrap-token] Using token: q5tdon.pm2p4ebcij7lfq5o
	I0531 18:57:14.182117   35394 out.go:204]   - Configuring RBAC rules ...
	I0531 18:57:14.182226   35394 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0531 18:57:14.182305   35394 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0531 18:57:14.182437   35394 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0531 18:57:14.182556   35394 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0531 18:57:14.182663   35394 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0531 18:57:14.182800   35394 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0531 18:57:14.182909   35394 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0531 18:57:14.182980   35394 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0531 18:57:14.183024   35394 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0531 18:57:14.183028   35394 kubeadm.go:322] 
	I0531 18:57:14.183084   35394 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0531 18:57:14.183087   35394 kubeadm.go:322] 
	I0531 18:57:14.183159   35394 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0531 18:57:14.183163   35394 kubeadm.go:322] 
	I0531 18:57:14.183186   35394 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0531 18:57:14.183241   35394 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0531 18:57:14.183289   35394 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0531 18:57:14.183292   35394 kubeadm.go:322] 
	I0531 18:57:14.183341   35394 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0531 18:57:14.183411   35394 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0531 18:57:14.183475   35394 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0531 18:57:14.183479   35394 kubeadm.go:322] 
	I0531 18:57:14.183557   35394 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0531 18:57:14.183629   35394 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0531 18:57:14.183633   35394 kubeadm.go:322] 
	I0531 18:57:14.183712   35394 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token q5tdon.pm2p4ebcij7lfq5o \
	I0531 18:57:14.183811   35394 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:8cab06d0df96335aa364fa490e6822dfb6067993e3a4e02e6ce54947f37c8db2 \
	I0531 18:57:14.183833   35394 kubeadm.go:322]     --control-plane 
	I0531 18:57:14.183837   35394 kubeadm.go:322] 
	I0531 18:57:14.183916   35394 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0531 18:57:14.183920   35394 kubeadm.go:322] 
	I0531 18:57:14.183996   35394 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token q5tdon.pm2p4ebcij7lfq5o \
	I0531 18:57:14.184102   35394 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:8cab06d0df96335aa364fa490e6822dfb6067993e3a4e02e6ce54947f37c8db2 
	I0531 18:57:14.184109   35394 cni.go:84] Creating CNI manager for ""
	I0531 18:57:14.184116   35394 cni.go:142] "docker" driver + "crio" runtime found, recommending kindnet
	I0531 18:57:14.186027   35394 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0531 18:57:14.187856   35394 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0531 18:57:14.194216   35394 cni.go:181] applying CNI manifest using /var/lib/minikube/binaries/v1.18.20/kubectl ...
	I0531 18:57:14.194234   35394 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0531 18:57:14.215675   35394 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0531 18:57:14.646097   35394 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:57:14.646218   35394 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl label nodes minikube.k8s.io/version=v1.30.1 minikube.k8s.io/commit=7022875d4a054c2d518e5e5a7b9d500799d50140 minikube.k8s.io/name=ingress-addon-legacy-546551 minikube.k8s.io/updated_at=2023_05_31T18_57_14_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:57:14.646022   35394 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0531 18:57:14.780660   35394 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:57:14.799384   35394 ops.go:34] apiserver oom_adj: -16
	I0531 18:57:15.383997   35394 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:57:15.883425   35394 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:57:16.384041   35394 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:57:16.884231   35394 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:57:17.384138   35394 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:57:17.883474   35394 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:57:18.383932   35394 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:57:18.883653   35394 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:57:19.384105   35394 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:57:19.883527   35394 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:57:20.383930   35394 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:57:20.884294   35394 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:57:21.384041   35394 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:57:21.884318   35394 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:57:22.383900   35394 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:57:22.884120   35394 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:57:23.383541   35394 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:57:23.883599   35394 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:57:24.383501   35394 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:57:24.884334   35394 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:57:25.383934   35394 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:57:25.884023   35394 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:57:26.384475   35394 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:57:26.883936   35394 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:57:27.384096   35394 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:57:27.883900   35394 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:57:28.383509   35394 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:57:28.884138   35394 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:57:29.383915   35394 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:57:29.487206   35394 kubeadm.go:1076] duration metric: took 14.841154619s to wait for elevateKubeSystemPrivileges.
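The burst of identical "kubectl get sa default" runs above is a readiness poll: minikube retries roughly twice a second until the "default" ServiceAccount exists, which is its signal that kube-system privileges have been elevated. A minimal shell sketch of the same poll, using the binary and kubeconfig paths from the log:

    # retry until the default ServiceAccount is created (paths as in the log above)
    until sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default \
        --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
      sleep 0.5
    done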
	I0531 18:57:29.487234   35394 kubeadm.go:406] StartCluster complete in 34.509414036s
	I0531 18:57:29.487249   35394 settings.go:142] acquiring lock: {Name:mk7112454687e7bda5617b0aa762b583179f0f78 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0531 18:57:29.487330   35394 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/16569-2389/kubeconfig
	I0531 18:57:29.488026   35394 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16569-2389/kubeconfig: {Name:mk0c7b1a200a0a97aa7bf4307790fd99336ec425 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0531 18:57:29.488718   35394 kapi.go:59] client config for ingress-addon-legacy-546551: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/16569-2389/.minikube/profiles/ingress-addon-legacy-546551/client.crt", KeyFile:"/home/jenkins/minikube-integration/16569-2389/.minikube/profiles/ingress-addon-legacy-546551/client.key", CAFile:"/home/jenkins/minikube-integration/16569-2389/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x13dde10), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0531 18:57:29.490082   35394 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0531 18:57:29.490363   35394 config.go:182] Loaded profile config "ingress-addon-legacy-546551": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.18.20
	I0531 18:57:29.490395   35394 addons.go:496] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
	I0531 18:57:29.490448   35394 addons.go:66] Setting storage-provisioner=true in profile "ingress-addon-legacy-546551"
	I0531 18:57:29.490464   35394 addons.go:228] Setting addon storage-provisioner=true in "ingress-addon-legacy-546551"
	I0531 18:57:29.490496   35394 host.go:66] Checking if "ingress-addon-legacy-546551" exists ...
	I0531 18:57:29.490675   35394 cert_rotation.go:137] Starting client certificate rotation controller
	I0531 18:57:29.490720   35394 addons.go:66] Setting default-storageclass=true in profile "ingress-addon-legacy-546551"
	I0531 18:57:29.490751   35394 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ingress-addon-legacy-546551"
	I0531 18:57:29.491024   35394 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-546551 --format={{.State.Status}}
	I0531 18:57:29.491029   35394 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-546551 --format={{.State.Status}}
	I0531 18:57:29.521926   35394 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0531 18:57:29.523949   35394 addons.go:420] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0531 18:57:29.523968   35394 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0531 18:57:29.524037   35394 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-546551
	I0531 18:57:29.541700   35394 kapi.go:59] client config for ingress-addon-legacy-546551: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/16569-2389/.minikube/profiles/ingress-addon-legacy-546551/client.crt", KeyFile:"/home/jenkins/minikube-integration/16569-2389/.minikube/profiles/ingress-addon-legacy-546551/client.key", CAFile:"/home/jenkins/minikube-integration/16569-2389/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x13dde10), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0531 18:57:29.550236   35394 addons.go:228] Setting addon default-storageclass=true in "ingress-addon-legacy-546551"
	I0531 18:57:29.550279   35394 host.go:66] Checking if "ingress-addon-legacy-546551" exists ...
	I0531 18:57:29.551052   35394 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-546551 --format={{.State.Status}}
	I0531 18:57:29.570829   35394 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32787 SSHKeyPath:/home/jenkins/minikube-integration/16569-2389/.minikube/machines/ingress-addon-legacy-546551/id_rsa Username:docker}
	I0531 18:57:29.592628   35394 addons.go:420] installing /etc/kubernetes/addons/storageclass.yaml
	I0531 18:57:29.592647   35394 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0531 18:57:29.592710   35394 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-546551
	I0531 18:57:29.617815   35394 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32787 SSHKeyPath:/home/jenkins/minikube-integration/16569-2389/.minikube/machines/ingress-addon-legacy-546551/id_rsa Username:docker}
	I0531 18:57:29.737066   35394 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0531 18:57:29.773960   35394 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0531 18:57:29.821636   35394 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0531 18:57:30.066796   35394 kapi.go:248] "coredns" deployment in "kube-system" namespace and "ingress-addon-legacy-546551" context rescaled to 1 replicas
	I0531 18:57:30.066888   35394 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0531 18:57:30.069410   35394 out.go:177] * Verifying Kubernetes components...
	I0531 18:57:30.072612   35394 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0531 18:57:30.149850   35394 start.go:916] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
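The sed pipeline a few lines up splices a hosts block into the CoreDNS ConfigMap so that host.minikube.internal resolves to the host gateway IP. Reconstructed from that command, the injected Corefile fragment looks like:

        hosts {
           192.168.49.1 host.minikube.internal
           fallthrough
        }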
	I0531 18:57:30.256584   35394 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0531 18:57:30.255352   35394 kapi.go:59] client config for ingress-addon-legacy-546551: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/16569-2389/.minikube/profiles/ingress-addon-legacy-546551/client.crt", KeyFile:"/home/jenkins/minikube-integration/16569-2389/.minikube/profiles/ingress-addon-legacy-546551/client.key", CAFile:"/home/jenkins/minikube-integration/16569-2389/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x13dde10), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0531 18:57:30.258698   35394 node_ready.go:35] waiting up to 6m0s for node "ingress-addon-legacy-546551" to be "Ready" ...
	I0531 18:57:30.258894   35394 addons.go:499] enable addons completed in 768.496281ms: enabled=[storage-provisioner default-storageclass]
	I0531 18:57:32.270924   35394 node_ready.go:58] node "ingress-addon-legacy-546551" has status "Ready":"False"
	I0531 18:57:34.271198   35394 node_ready.go:58] node "ingress-addon-legacy-546551" has status "Ready":"False"
	I0531 18:57:36.770309   35394 node_ready.go:58] node "ingress-addon-legacy-546551" has status "Ready":"False"
	I0531 18:57:39.270868   35394 node_ready.go:58] node "ingress-addon-legacy-546551" has status "Ready":"False"
	I0531 18:57:41.770837   35394 node_ready.go:58] node "ingress-addon-legacy-546551" has status "Ready":"False"
	I0531 18:57:44.270770   35394 node_ready.go:58] node "ingress-addon-legacy-546551" has status "Ready":"False"
	I0531 18:57:46.770118   35394 node_ready.go:58] node "ingress-addon-legacy-546551" has status "Ready":"False"
	I0531 18:57:48.770148   35394 node_ready.go:58] node "ingress-addon-legacy-546551" has status "Ready":"False"
	I0531 18:57:50.770884   35394 node_ready.go:58] node "ingress-addon-legacy-546551" has status "Ready":"False"
	I0531 18:57:53.271104   35394 node_ready.go:58] node "ingress-addon-legacy-546551" has status "Ready":"False"
	I0531 18:57:55.770337   35394 node_ready.go:58] node "ingress-addon-legacy-546551" has status "Ready":"False"
	I0531 18:57:58.271042   35394 node_ready.go:58] node "ingress-addon-legacy-546551" has status "Ready":"False"
	I0531 18:58:00.775779   35394 node_ready.go:58] node "ingress-addon-legacy-546551" has status "Ready":"False"
	I0531 18:58:03.271229   35394 node_ready.go:58] node "ingress-addon-legacy-546551" has status "Ready":"False"
	I0531 18:58:05.770244   35394 node_ready.go:58] node "ingress-addon-legacy-546551" has status "Ready":"False"
	I0531 18:58:07.770781   35394 node_ready.go:49] node "ingress-addon-legacy-546551" has status "Ready":"True"
	I0531 18:58:07.770861   35394 node_ready.go:38] duration metric: took 37.512137731s waiting for node "ingress-addon-legacy-546551" to be "Ready" ...
	I0531 18:58:07.770877   35394 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0531 18:58:07.778125   35394 pod_ready.go:78] waiting up to 6m0s for pod "coredns-66bff467f8-v2f7s" in "kube-system" namespace to be "Ready" ...
	I0531 18:58:09.786609   35394 pod_ready.go:102] pod "coredns-66bff467f8-v2f7s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-05-31 18:57:29 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[] Resize:}
	I0531 18:58:11.793408   35394 pod_ready.go:102] pod "coredns-66bff467f8-v2f7s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-05-31 18:58:11 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[] Resize:}
	I0531 18:58:14.286281   35394 pod_ready.go:102] pod "coredns-66bff467f8-v2f7s" in "kube-system" namespace has status "Ready":"False"
	I0531 18:58:16.286422   35394 pod_ready.go:102] pod "coredns-66bff467f8-v2f7s" in "kube-system" namespace has status "Ready":"False"
	I0531 18:58:18.287172   35394 pod_ready.go:102] pod "coredns-66bff467f8-v2f7s" in "kube-system" namespace has status "Ready":"False"
	I0531 18:58:18.786799   35394 pod_ready.go:92] pod "coredns-66bff467f8-v2f7s" in "kube-system" namespace has status "Ready":"True"
	I0531 18:58:18.786827   35394 pod_ready.go:81] duration metric: took 11.008665023s waiting for pod "coredns-66bff467f8-v2f7s" in "kube-system" namespace to be "Ready" ...
	I0531 18:58:18.786840   35394 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ingress-addon-legacy-546551" in "kube-system" namespace to be "Ready" ...
	I0531 18:58:18.792309   35394 pod_ready.go:92] pod "etcd-ingress-addon-legacy-546551" in "kube-system" namespace has status "Ready":"True"
	I0531 18:58:18.792338   35394 pod_ready.go:81] duration metric: took 5.487962ms waiting for pod "etcd-ingress-addon-legacy-546551" in "kube-system" namespace to be "Ready" ...
	I0531 18:58:18.792354   35394 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ingress-addon-legacy-546551" in "kube-system" namespace to be "Ready" ...
	I0531 18:58:18.797974   35394 pod_ready.go:92] pod "kube-apiserver-ingress-addon-legacy-546551" in "kube-system" namespace has status "Ready":"True"
	I0531 18:58:18.798001   35394 pod_ready.go:81] duration metric: took 5.638655ms waiting for pod "kube-apiserver-ingress-addon-legacy-546551" in "kube-system" namespace to be "Ready" ...
	I0531 18:58:18.798017   35394 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ingress-addon-legacy-546551" in "kube-system" namespace to be "Ready" ...
	I0531 18:58:18.803128   35394 pod_ready.go:92] pod "kube-controller-manager-ingress-addon-legacy-546551" in "kube-system" namespace has status "Ready":"True"
	I0531 18:58:18.803151   35394 pod_ready.go:81] duration metric: took 5.126291ms waiting for pod "kube-controller-manager-ingress-addon-legacy-546551" in "kube-system" namespace to be "Ready" ...
	I0531 18:58:18.803163   35394 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-wxshz" in "kube-system" namespace to be "Ready" ...
	I0531 18:58:18.808374   35394 pod_ready.go:92] pod "kube-proxy-wxshz" in "kube-system" namespace has status "Ready":"True"
	I0531 18:58:18.808404   35394 pod_ready.go:81] duration metric: took 5.232292ms waiting for pod "kube-proxy-wxshz" in "kube-system" namespace to be "Ready" ...
	I0531 18:58:18.808417   35394 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ingress-addon-legacy-546551" in "kube-system" namespace to be "Ready" ...
	I0531 18:58:18.981856   35394 request.go:628] Waited for 173.337555ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ingress-addon-legacy-546551
	I0531 18:58:19.182045   35394 request.go:628] Waited for 197.388728ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ingress-addon-legacy-546551
	I0531 18:58:19.184793   35394 pod_ready.go:92] pod "kube-scheduler-ingress-addon-legacy-546551" in "kube-system" namespace has status "Ready":"True"
	I0531 18:58:19.184859   35394 pod_ready.go:81] duration metric: took 376.400934ms waiting for pod "kube-scheduler-ingress-addon-legacy-546551" in "kube-system" namespace to be "Ready" ...
	I0531 18:58:19.184879   35394 pod_ready.go:38] duration metric: took 11.413988884s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0531 18:58:19.184896   35394 api_server.go:52] waiting for apiserver process to appear ...
	I0531 18:58:19.184961   35394 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 18:58:19.198330   35394 api_server.go:72] duration metric: took 49.131396073s to wait for apiserver process to appear ...
	I0531 18:58:19.198357   35394 api_server.go:88] waiting for apiserver healthz status ...
	I0531 18:58:19.198375   35394 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0531 18:58:19.207858   35394 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0531 18:58:19.208839   35394 api_server.go:141] control plane version: v1.18.20
	I0531 18:58:19.208866   35394 api_server.go:131] duration metric: took 10.497995ms to wait for apiserver health ...
	I0531 18:58:19.208875   35394 system_pods.go:43] waiting for kube-system pods to appear ...
	I0531 18:58:19.382269   35394 request.go:628] Waited for 173.331196ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I0531 18:58:19.388339   35394 system_pods.go:59] 8 kube-system pods found
	I0531 18:58:19.388378   35394 system_pods.go:61] "coredns-66bff467f8-v2f7s" [315441c8-555f-4fd0-9578-0dec74b27a00] Running
	I0531 18:58:19.388384   35394 system_pods.go:61] "etcd-ingress-addon-legacy-546551" [01c9b918-6719-41cf-81e1-f824cddf7835] Running
	I0531 18:58:19.388389   35394 system_pods.go:61] "kindnet-flrvc" [cb53f98b-74d0-4bd8-9f67-bd6aef525bcf] Running
	I0531 18:58:19.388394   35394 system_pods.go:61] "kube-apiserver-ingress-addon-legacy-546551" [97ff7ac1-fe18-4628-bfbc-ac380ad5c2b7] Running
	I0531 18:58:19.388404   35394 system_pods.go:61] "kube-controller-manager-ingress-addon-legacy-546551" [cc69ceae-34d1-458f-8fd8-5cf0e11bd564] Running
	I0531 18:58:19.388408   35394 system_pods.go:61] "kube-proxy-wxshz" [8fc681c0-46cc-4663-a9ee-e0635e21a957] Running
	I0531 18:58:19.388413   35394 system_pods.go:61] "kube-scheduler-ingress-addon-legacy-546551" [d9afa9e6-1f00-42e2-82a4-19568d0958c7] Running
	I0531 18:58:19.388419   35394 system_pods.go:61] "storage-provisioner" [053afc16-bae4-4337-9364-9c39698ca30a] Running
	I0531 18:58:19.388424   35394 system_pods.go:74] duration metric: took 179.544558ms to wait for pod list to return data ...
	I0531 18:58:19.388433   35394 default_sa.go:34] waiting for default service account to be created ...
	I0531 18:58:19.582015   35394 request.go:628] Waited for 193.516332ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/default/serviceaccounts
	I0531 18:58:19.584778   35394 default_sa.go:45] found service account: "default"
	I0531 18:58:19.584803   35394 default_sa.go:55] duration metric: took 196.364041ms for default service account to be created ...
	I0531 18:58:19.584816   35394 system_pods.go:116] waiting for k8s-apps to be running ...
	I0531 18:58:19.782212   35394 request.go:628] Waited for 197.309968ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I0531 18:58:19.789008   35394 system_pods.go:86] 8 kube-system pods found
	I0531 18:58:19.789045   35394 system_pods.go:89] "coredns-66bff467f8-v2f7s" [315441c8-555f-4fd0-9578-0dec74b27a00] Running
	I0531 18:58:19.789054   35394 system_pods.go:89] "etcd-ingress-addon-legacy-546551" [01c9b918-6719-41cf-81e1-f824cddf7835] Running
	I0531 18:58:19.789082   35394 system_pods.go:89] "kindnet-flrvc" [cb53f98b-74d0-4bd8-9f67-bd6aef525bcf] Running
	I0531 18:58:19.789106   35394 system_pods.go:89] "kube-apiserver-ingress-addon-legacy-546551" [97ff7ac1-fe18-4628-bfbc-ac380ad5c2b7] Running
	I0531 18:58:19.789113   35394 system_pods.go:89] "kube-controller-manager-ingress-addon-legacy-546551" [cc69ceae-34d1-458f-8fd8-5cf0e11bd564] Running
	I0531 18:58:19.789121   35394 system_pods.go:89] "kube-proxy-wxshz" [8fc681c0-46cc-4663-a9ee-e0635e21a957] Running
	I0531 18:58:19.789136   35394 system_pods.go:89] "kube-scheduler-ingress-addon-legacy-546551" [d9afa9e6-1f00-42e2-82a4-19568d0958c7] Running
	I0531 18:58:19.789141   35394 system_pods.go:89] "storage-provisioner" [053afc16-bae4-4337-9364-9c39698ca30a] Running
	I0531 18:58:19.789159   35394 system_pods.go:126] duration metric: took 204.327506ms to wait for k8s-apps to be running ...
	I0531 18:58:19.789174   35394 system_svc.go:44] waiting for kubelet service to be running ....
	I0531 18:58:19.789253   35394 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0531 18:58:19.806486   35394 system_svc.go:56] duration metric: took 17.301308ms WaitForService to wait for kubelet.
	I0531 18:58:19.806527   35394 kubeadm.go:581] duration metric: took 49.739593884s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0531 18:58:19.806571   35394 node_conditions.go:102] verifying NodePressure condition ...
	I0531 18:58:19.981839   35394 request.go:628] Waited for 175.174066ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes
	I0531 18:58:19.984812   35394 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0531 18:58:19.984845   35394 node_conditions.go:123] node cpu capacity is 2
	I0531 18:58:19.984857   35394 node_conditions.go:105] duration metric: took 178.275919ms to run NodePressure ...
	I0531 18:58:19.984868   35394 start.go:228] waiting for startup goroutines ...
	I0531 18:58:19.984875   35394 start.go:233] waiting for cluster config update ...
	I0531 18:58:19.984884   35394 start.go:242] writing updated cluster config ...
	I0531 18:58:19.985188   35394 ssh_runner.go:195] Run: rm -f paused
	I0531 18:58:20.048817   35394 start.go:573] kubectl: 1.27.2, cluster: 1.18.20 (minor skew: 9)
	I0531 18:58:20.051396   35394 out.go:177] 
	W0531 18:58:20.053664   35394 out.go:239] ! /usr/local/bin/kubectl is version 1.27.2, which may have incompatibilities with Kubernetes 1.18.20.
	I0531 18:58:20.055573   35394 out.go:177]   - Want kubectl v1.18.20? Try 'minikube kubectl -- get pods -A'
	I0531 18:58:20.057762   35394 out.go:177] * Done! kubectl is now configured to use "ingress-addon-legacy-546551" cluster and "default" namespace by default
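The warning above is a pure client/server version skew: the host kubectl is v1.27.2 while the cluster runs Kubernetes v1.18.20, nine minor versions apart. Per the hint in the log, a matching client can be invoked through minikube itself:

    minikube kubectl -- get pods -A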
	
	* 
	* ==> CRI-O <==
	* May 31 19:01:20 ingress-addon-legacy-546551 crio[904]: time="2023-05-31 19:01:20.507694507Z" level=info msg="Stopped container ba62a1a16a2b5708d45fa600abbde700af378fcc95f7a776cd98a4a1658f2699: ingress-nginx/ingress-nginx-controller-7fcf777cb7-8pk64/controller" id=0f3496d6-e25c-4b5d-9c9b-af8207ca0df7 name=/runtime.v1alpha2.RuntimeService/StopContainer
	May 31 19:01:20 ingress-addon-legacy-546551 crio[904]: time="2023-05-31 19:01:20.509959224Z" level=info msg="Stopped container ba62a1a16a2b5708d45fa600abbde700af378fcc95f7a776cd98a4a1658f2699: ingress-nginx/ingress-nginx-controller-7fcf777cb7-8pk64/controller" id=7f43e2d1-6169-44f9-aba1-0a2b7a1c2d0b name=/runtime.v1alpha2.RuntimeService/StopContainer
	May 31 19:01:20 ingress-addon-legacy-546551 crio[904]: time="2023-05-31 19:01:20.510181155Z" level=info msg="Stopping pod sandbox: 3b6a80e44fc77d158dc0738dc1515c3f540bfaf18f3705b9a86612c2b90728a0" id=34b4d958-2281-4560-bfb8-00871e9ac112 name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	May 31 19:01:20 ingress-addon-legacy-546551 crio[904]: time="2023-05-31 19:01:20.513490834Z" level=info msg="Restoring iptables rules: *nat\n:KUBE-HP-DEBHA7Z3YMPDQCA4 - [0:0]\n:KUBE-HP-BWF5MN2B33Y6FP3F - [0:0]\n:KUBE-HOSTPORTS - [0:0]\n-X KUBE-HP-DEBHA7Z3YMPDQCA4\n-X KUBE-HP-BWF5MN2B33Y6FP3F\nCOMMIT\n"
	May 31 19:01:20 ingress-addon-legacy-546551 crio[904]: time="2023-05-31 19:01:20.519515688Z" level=info msg="Stopping pod sandbox: 3b6a80e44fc77d158dc0738dc1515c3f540bfaf18f3705b9a86612c2b90728a0" id=570b80d2-a247-4f8e-a0c0-8cb81a435fa2 name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	May 31 19:01:20 ingress-addon-legacy-546551 crio[904]: time="2023-05-31 19:01:20.520257433Z" level=info msg="Closing host port tcp:80"
	May 31 19:01:20 ingress-addon-legacy-546551 crio[904]: time="2023-05-31 19:01:20.520300460Z" level=info msg="Closing host port tcp:443"
	May 31 19:01:20 ingress-addon-legacy-546551 crio[904]: time="2023-05-31 19:01:20.521862948Z" level=info msg="Host port tcp:80 does not have an open socket"
	May 31 19:01:20 ingress-addon-legacy-546551 crio[904]: time="2023-05-31 19:01:20.521894545Z" level=info msg="Host port tcp:443 does not have an open socket"
	May 31 19:01:20 ingress-addon-legacy-546551 crio[904]: time="2023-05-31 19:01:20.522062585Z" level=info msg="Got pod network &{Name:ingress-nginx-controller-7fcf777cb7-8pk64 Namespace:ingress-nginx ID:3b6a80e44fc77d158dc0738dc1515c3f540bfaf18f3705b9a86612c2b90728a0 UID:2847ccee-57c8-4113-b170-7bd9fe2494fa NetNS:/var/run/netns/10d69b9f-4609-4ba6-83a0-16dc3e915451 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	May 31 19:01:20 ingress-addon-legacy-546551 crio[904]: time="2023-05-31 19:01:20.522209587Z" level=info msg="Deleting pod ingress-nginx_ingress-nginx-controller-7fcf777cb7-8pk64 from CNI network \"kindnet\" (type=ptp)"
	May 31 19:01:20 ingress-addon-legacy-546551 crio[904]: time="2023-05-31 19:01:20.552348624Z" level=info msg="Stopped pod sandbox: 3b6a80e44fc77d158dc0738dc1515c3f540bfaf18f3705b9a86612c2b90728a0" id=34b4d958-2281-4560-bfb8-00871e9ac112 name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	May 31 19:01:20 ingress-addon-legacy-546551 crio[904]: time="2023-05-31 19:01:20.552465341Z" level=info msg="Stopped pod sandbox (already stopped): 3b6a80e44fc77d158dc0738dc1515c3f540bfaf18f3705b9a86612c2b90728a0" id=570b80d2-a247-4f8e-a0c0-8cb81a435fa2 name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	May 31 19:01:21 ingress-addon-legacy-546551 crio[904]: time="2023-05-31 19:01:21.577645124Z" level=info msg="Checking image status: gcr.io/google-samples/hello-app:1.0" id=6688464c-9c20-46ad-a490-cb95933643b9 name=/runtime.v1alpha2.ImageService/ImageStatus
	May 31 19:01:21 ingress-addon-legacy-546551 crio[904]: time="2023-05-31 19:01:21.577856675Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:13753a81eccfdd153bf7fc9a4c9198edbcce0110e7f46ed0d38cc654a6458ff5,RepoTags:[gcr.io/google-samples/hello-app:1.0],RepoDigests:[gcr.io/google-samples/hello-app@sha256:845f77fab71033404f4cfceaa1ddb27b70c3551ceb22a5e7f4498cdda6c9daea],Size_:28496999,Uid:nil,Username:nonroot,Spec:nil,},Info:map[string]string{},}" id=6688464c-9c20-46ad-a490-cb95933643b9 name=/runtime.v1alpha2.ImageService/ImageStatus
	May 31 19:01:21 ingress-addon-legacy-546551 crio[904]: time="2023-05-31 19:01:21.578565682Z" level=info msg="Checking image status: gcr.io/google-samples/hello-app:1.0" id=c11695f5-a18b-46f1-a9cb-ba613f93259a name=/runtime.v1alpha2.ImageService/ImageStatus
	May 31 19:01:21 ingress-addon-legacy-546551 crio[904]: time="2023-05-31 19:01:21.578779415Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:13753a81eccfdd153bf7fc9a4c9198edbcce0110e7f46ed0d38cc654a6458ff5,RepoTags:[gcr.io/google-samples/hello-app:1.0],RepoDigests:[gcr.io/google-samples/hello-app@sha256:845f77fab71033404f4cfceaa1ddb27b70c3551ceb22a5e7f4498cdda6c9daea],Size_:28496999,Uid:nil,Username:nonroot,Spec:nil,},Info:map[string]string{},}" id=c11695f5-a18b-46f1-a9cb-ba613f93259a name=/runtime.v1alpha2.ImageService/ImageStatus
	May 31 19:01:21 ingress-addon-legacy-546551 crio[904]: time="2023-05-31 19:01:21.579662352Z" level=info msg="Creating container: default/hello-world-app-5f5d8b66bb-4hq8x/hello-world-app" id=4c0d884d-6e56-4afd-b4aa-50488826cfbb name=/runtime.v1alpha2.RuntimeService/CreateContainer
	May 31 19:01:21 ingress-addon-legacy-546551 crio[904]: time="2023-05-31 19:01:21.579759361Z" level=warning msg="Allowed annotations are specified for workload []"
	May 31 19:01:21 ingress-addon-legacy-546551 crio[904]: time="2023-05-31 19:01:21.678778166Z" level=info msg="Created container 2480613c3044c1d3bca4dab208e811c8a58e2792726c0cc2248892c388451a32: default/hello-world-app-5f5d8b66bb-4hq8x/hello-world-app" id=4c0d884d-6e56-4afd-b4aa-50488826cfbb name=/runtime.v1alpha2.RuntimeService/CreateContainer
	May 31 19:01:21 ingress-addon-legacy-546551 crio[904]: time="2023-05-31 19:01:21.679648853Z" level=info msg="Starting container: 2480613c3044c1d3bca4dab208e811c8a58e2792726c0cc2248892c388451a32" id=370f7500-ae10-4131-8b38-5f5b83067d0f name=/runtime.v1alpha2.RuntimeService/StartContainer
	May 31 19:01:21 ingress-addon-legacy-546551 crio[904]: time="2023-05-31 19:01:21.694924475Z" level=info msg="Started container" PID=3658 containerID=2480613c3044c1d3bca4dab208e811c8a58e2792726c0cc2248892c388451a32 description=default/hello-world-app-5f5d8b66bb-4hq8x/hello-world-app id=370f7500-ae10-4131-8b38-5f5b83067d0f name=/runtime.v1alpha2.RuntimeService/StartContainer sandboxID=7bf8a2257b3890a49f83cf531477d759bc73ea06d6e1987966237f7c9d986653
	May 31 19:01:21 ingress-addon-legacy-546551 conmon[3647]: conmon 2480613c3044c1d3bca4 <ninfo>: container 3658 exited with status 1
	May 31 19:01:22 ingress-addon-legacy-546551 crio[904]: time="2023-05-31 19:01:22.209227910Z" level=info msg="Removing container: fc02ec35c3cae5401cceaab0ec2c389e79d6b5c6d45d37faecf8a06b05ef0450" id=b8a11cdf-7f0e-4bdb-9ceb-4327e5083bf5 name=/runtime.v1alpha2.RuntimeService/RemoveContainer
	May 31 19:01:22 ingress-addon-legacy-546551 crio[904]: time="2023-05-31 19:01:22.234655690Z" level=info msg="Removed container fc02ec35c3cae5401cceaab0ec2c389e79d6b5c6d45d37faecf8a06b05ef0450: default/hello-world-app-5f5d8b66bb-4hq8x/hello-world-app" id=b8a11cdf-7f0e-4bdb-9ceb-4327e5083bf5 name=/runtime.v1alpha2.RuntimeService/RemoveContainer
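The CRI-O entries above end with the crashing hello-world-app container being recreated and the previous attempt removed. The same container state can be inspected directly on the node with crictl pointed at the CRI-O socket recorded in the node annotations below; a hedged sketch:

    # list all containers, including exited ones (socket path from the node's cri-socket annotation)
    sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a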
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	2480613c3044c       13753a81eccfdd153bf7fc9a4c9198edbcce0110e7f46ed0d38cc654a6458ff5                                                   4 seconds ago       Exited              hello-world-app           2                   7bf8a2257b389       hello-world-app-5f5d8b66bb-4hq8x
	5635b8c80e797       docker.io/library/nginx@sha256:203cba3f56d7dba1d66b95c091db65a4f0778eb5d16e76151e73e0413e317328                    2 minutes ago       Running             nginx                     0                   9eaabba56b61e       nginx
	ba62a1a16a2b5       registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324   2 minutes ago       Exited              controller                0                   3b6a80e44fc77       ingress-nginx-controller-7fcf777cb7-8pk64
	7fa6605bbf58b       a883f7fc35610a84d589cbb450eade9face1d1a8b2cbdafa1690cbffe68cfe88                                                   3 minutes ago       Exited              patch                     1                   334e3c7a42d43       ingress-nginx-admission-patch-wqlfz
	11018956388d1       docker.io/jettech/kube-webhook-certgen@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7     3 minutes ago       Exited              create                    0                   89153bfadaf71       ingress-nginx-admission-create-tnzs8
	07f9f7e11f6b2       6e17ba78cf3ebe1410fe828dc4ca57d3df37ad0b3c1a64161e5c27d57a24d184                                                   3 minutes ago       Running             coredns                   0                   2d2f160634891       coredns-66bff467f8-v2f7s
	12a1cb12a6a0f       gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2    3 minutes ago       Running             storage-provisioner       0                   404aa61fc750a       storage-provisioner
	432f96e63003a       docker.io/kindest/kindnetd@sha256:2c39858b71cf6c5737ff0daa8130a6574d4c6bd2a7dacaf002060c02f2bc1b4f                 3 minutes ago       Running             kindnet-cni               0                   7dd18f7b4592c       kindnet-flrvc
	62651d652bfa4       565297bc6f7d41fdb7a8ac7f9d75617ef4e6efdd1b1e41af6e060e19c44c28a8                                                   3 minutes ago       Running             kube-proxy                0                   23270f8ab1461       kube-proxy-wxshz
	8d0a96acd3df4       095f37015706de6eedb4f57eb2f9a25a1e3bf4bec63d50ba73f8968ef4094fd1                                                   4 minutes ago       Running             kube-scheduler            0                   a03acd38d8353       kube-scheduler-ingress-addon-legacy-546551
	ed310a397b13c       2694cf044d66591c37b12c60ce1f1cdba3d271af5ebda43a2e4d32ebbadd97d0                                                   4 minutes ago       Running             kube-apiserver            0                   ec8c0af85c98e       kube-apiserver-ingress-addon-legacy-546551
	6732ce88ab515       ab707b0a0ea339254cc6e3f2e7d618d4793d5129acb2288e9194769271404952                                                   4 minutes ago       Running             etcd                      0                   efda18b3d0c26       etcd-ingress-addon-legacy-546551
	d6e85a0394605       68a4fac29a865f21217550dbd3570dc1adbc602cf05d6eeb6f060eec1359e1f1                                                   4 minutes ago       Running             kube-controller-manager   0                   64ea5b819fb48       kube-controller-manager-ingress-addon-legacy-546551
	
	* 
	* ==> coredns [07f9f7e11f6b2c55ca26b85e3a2373274f397b19d7a4237a818a7928479182f0] <==
	* [INFO] 10.244.0.5:40288 - 28642 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000064869s
	[INFO] 10.244.0.5:40288 - 44932 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000053596s
	[INFO] 10.244.0.5:40288 - 44582 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000045341s
	[INFO] 10.244.0.5:40288 - 22003 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000046761s
	[INFO] 10.244.0.5:40288 - 38430 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.000896172s
	[INFO] 10.244.0.5:40288 - 61704 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.000803241s
	[INFO] 10.244.0.5:40288 - 57530 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000102711s
	[INFO] 10.244.0.5:48487 - 8197 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000098444s
	[INFO] 10.244.0.5:48487 - 5507 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.00005065s
	[INFO] 10.244.0.5:48487 - 26214 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000067979s
	[INFO] 10.244.0.5:55416 - 43173 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000035109s
	[INFO] 10.244.0.5:48487 - 18259 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000045842s
	[INFO] 10.244.0.5:55416 - 48638 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000029145s
	[INFO] 10.244.0.5:48487 - 6966 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.00004105s
	[INFO] 10.244.0.5:55416 - 59264 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000024222s
	[INFO] 10.244.0.5:48487 - 64765 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.00003821s
	[INFO] 10.244.0.5:55416 - 31263 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000026994s
	[INFO] 10.244.0.5:55416 - 64385 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000039885s
	[INFO] 10.244.0.5:55416 - 59322 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000058223s
	[INFO] 10.244.0.5:55416 - 5719 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001094734s
	[INFO] 10.244.0.5:48487 - 31555 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001691061s
	[INFO] 10.244.0.5:48487 - 60625 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001489947s
	[INFO] 10.244.0.5:48487 - 29788 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000073961s
	[INFO] 10.244.0.5:55416 - 57655 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001391536s
	[INFO] 10.244.0.5:55416 - 20240 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.00006985s
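The query pattern above is ordinary resolv.conf search-path expansion: with the cluster's ndots setting, each search suffix (ingress-nginx.svc.cluster.local, svc.cluster.local, cluster.local, the EC2 internal domain) is tried and answered NXDOMAIN before the fully qualified service name finally returns NOERROR. A hedged in-cluster reproduction using the busybox image seen elsewhere in this report:

    kubectl run dns-test --rm -it --restart=Never --image=gcr.io/k8s-minikube/busybox \
      -- nslookup hello-world-app.default.svc.cluster.local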
	
	* 
	* ==> describe nodes <==
	* Name:               ingress-addon-legacy-546551
	Roles:              master
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ingress-addon-legacy-546551
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=7022875d4a054c2d518e5e5a7b9d500799d50140
	                    minikube.k8s.io/name=ingress-addon-legacy-546551
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_05_31T18_57_14_0700
	                    minikube.k8s.io/version=v1.30.1
	                    node-role.kubernetes.io/master=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 31 May 2023 18:57:11 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ingress-addon-legacy-546551
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 31 May 2023 19:01:17 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 31 May 2023 19:01:17 +0000   Wed, 31 May 2023 18:57:06 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 31 May 2023 19:01:17 +0000   Wed, 31 May 2023 18:57:06 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 31 May 2023 19:01:17 +0000   Wed, 31 May 2023 18:57:06 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 31 May 2023 19:01:17 +0000   Wed, 31 May 2023 18:58:07 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    ingress-addon-legacy-546551
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022624Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022624Ki
	  pods:               110
	System Info:
	  Machine ID:                 6f77d62d8f7347299db5c30f675feea9
	  System UUID:                1dcf2e1e-2709-48ae-ae1b-0517e3d59fae
	  Boot ID:                    35429d0f-2ece-432d-a992-d9f8cda99d9c
	  Kernel Version:             5.15.0-1037-aws
	  OS Image:                   Ubuntu 22.04.2 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.24.5
	  Kubelet Version:            v1.18.20
	  Kube-Proxy Version:         v1.18.20
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (10 in total)
	  Namespace                   Name                                                   CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	  ---------                   ----                                                   ------------  ----------  ---------------  -------------  ---
	  default                     hello-world-app-5f5d8b66bb-4hq8x                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         25s
	  default                     nginx                                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m44s
	  kube-system                 coredns-66bff467f8-v2f7s                               100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     3m57s
	  kube-system                 etcd-ingress-addon-legacy-546551                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m9s
	  kube-system                 kindnet-flrvc                                          100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      3m57s
	  kube-system                 kube-apiserver-ingress-addon-legacy-546551             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m9s
	  kube-system                 kube-controller-manager-ingress-addon-legacy-546551    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m9s
	  kube-system                 kube-proxy-wxshz                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m57s
	  kube-system                 kube-scheduler-ingress-addon-legacy-546551             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m9s
	  kube-system                 storage-provisioner                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m56s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             120Mi (1%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From        Message
	  ----    ------                   ----                   ----        -------
	  Normal  NodeHasSufficientMemory  4m22s (x5 over 4m23s)  kubelet     Node ingress-addon-legacy-546551 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m22s (x5 over 4m23s)  kubelet     Node ingress-addon-legacy-546551 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m22s (x4 over 4m23s)  kubelet     Node ingress-addon-legacy-546551 status is now: NodeHasSufficientPID
	  Normal  Starting                 4m9s                   kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m9s                   kubelet     Node ingress-addon-legacy-546551 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m9s                   kubelet     Node ingress-addon-legacy-546551 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m9s                   kubelet     Node ingress-addon-legacy-546551 status is now: NodeHasSufficientPID
	  Normal  Starting                 3m55s                  kube-proxy  Starting kube-proxy.
	  Normal  NodeReady                3m19s                  kubelet     Node ingress-addon-legacy-546551 status is now: NodeReady
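The Ready condition flipping True at 18:58:07 above matches the node_ready wait earlier in the log. The same information can be re-checked with a describe call against the single node, or by extracting just the Ready condition; a sketch:

    kubectl describe node ingress-addon-legacy-546551
    kubectl get node ingress-addon-legacy-546551 \
      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'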
	
	* 
	* ==> dmesg <==
	* [  +0.000741] FS-Cache: N-cookie c=0000000c [p=00000003 fl=2 nc=0 na=1]
	[  +0.000954] FS-Cache: N-cookie d=0000000077ffde6d{9p.inode} n=0000000074ec7799
	[  +0.001241] FS-Cache: N-key=[8] '915b3b0000000000'
	[  +0.003042] FS-Cache: Duplicate cookie detected
	[  +0.000777] FS-Cache: O-cookie c=00000006 [p=00000003 fl=226 nc=0 na=1]
	[  +0.000979] FS-Cache: O-cookie d=0000000077ffde6d{9p.inode} n=0000000031e1563a
	[  +0.001057] FS-Cache: O-key=[8] '915b3b0000000000'
	[  +0.000743] FS-Cache: N-cookie c=0000000d [p=00000003 fl=2 nc=0 na=1]
	[  +0.000985] FS-Cache: N-cookie d=0000000077ffde6d{9p.inode} n=000000007278ef73
	[  +0.001110] FS-Cache: N-key=[8] '915b3b0000000000'
	[  +2.905928] FS-Cache: Duplicate cookie detected
	[  +0.000862] FS-Cache: O-cookie c=00000004 [p=00000003 fl=226 nc=0 na=1]
	[  +0.001154] FS-Cache: O-cookie d=0000000077ffde6d{9p.inode} n=00000000ad00c953
	[  +0.001219] FS-Cache: O-key=[8] '905b3b0000000000'
	[  +0.000792] FS-Cache: N-cookie c=0000000f [p=00000003 fl=2 nc=0 na=1]
	[  +0.001108] FS-Cache: N-cookie d=0000000077ffde6d{9p.inode} n=00000000be9b4fe0
	[  +0.001229] FS-Cache: N-key=[8] '905b3b0000000000'
	[  +0.280333] FS-Cache: Duplicate cookie detected
	[  +0.000717] FS-Cache: O-cookie c=00000009 [p=00000003 fl=226 nc=0 na=1]
	[  +0.000979] FS-Cache: O-cookie d=0000000077ffde6d{9p.inode} n=000000003fd4f91a
	[  +0.001109] FS-Cache: O-key=[8] '985b3b0000000000'
	[  +0.000717] FS-Cache: N-cookie c=00000010 [p=00000003 fl=2 nc=0 na=1]
	[  +0.000957] FS-Cache: N-cookie d=0000000077ffde6d{9p.inode} n=0000000074ec7799
	[  +0.001067] FS-Cache: N-key=[8] '985b3b0000000000'
	[  +9.760834] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	
	* 
	* ==> etcd [6732ce88ab51504c3576e391a2c7ecad02c353bfa64abdd8855e20346ddcae90] <==
	* raft2023/05/31 18:57:05 INFO: aec36adc501070cc became follower at term 0
	raft2023/05/31 18:57:05 INFO: newRaft aec36adc501070cc [peers: [], term: 0, commit: 0, applied: 0, lastindex: 0, lastterm: 0]
	raft2023/05/31 18:57:05 INFO: aec36adc501070cc became follower at term 1
	raft2023/05/31 18:57:05 INFO: aec36adc501070cc switched to configuration voters=(12593026477526642892)
	2023-05-31 18:57:05.394895 W | auth: simple token is not cryptographically signed
	2023-05-31 18:57:05.398225 I | etcdserver: starting server... [version: 3.4.3, cluster version: to_be_decided]
	2023-05-31 18:57:05.494894 I | etcdserver: aec36adc501070cc as single-node; fast-forwarding 9 ticks (election ticks 10)
	raft2023/05/31 18:57:05 INFO: aec36adc501070cc switched to configuration voters=(12593026477526642892)
	2023-05-31 18:57:05.680521 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 
	2023-05-31 18:57:05.680927 I | embed: listening for metrics on http://127.0.0.1:2381
	2023-05-31 18:57:05.680967 I | embed: listening for peers on 192.168.49.2:2380
	2023-05-31 18:57:05.680983 I | etcdserver/membership: added member aec36adc501070cc [https://192.168.49.2:2380] to cluster fa54960ea34d58be
	raft2023/05/31 18:57:06 INFO: aec36adc501070cc is starting a new election at term 1
	raft2023/05/31 18:57:06 INFO: aec36adc501070cc became candidate at term 2
	raft2023/05/31 18:57:06 INFO: aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2
	raft2023/05/31 18:57:06 INFO: aec36adc501070cc became leader at term 2
	raft2023/05/31 18:57:06 INFO: raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2
	2023-05-31 18:57:06.351902 I | etcdserver: setting up the initial cluster version to 3.4
	2023-05-31 18:57:06.352188 I | etcdserver: published {Name:ingress-addon-legacy-546551 ClientURLs:[https://192.168.49.2:2379]} to cluster fa54960ea34d58be
	2023-05-31 18:57:06.358323 N | etcdserver/membership: set the initial cluster version to 3.4
	2023-05-31 18:57:06.367978 I | etcdserver/api: enabled capabilities for version 3.4
	2023-05-31 18:57:06.368058 I | embed: ready to serve client requests
	2023-05-31 18:57:06.369395 I | embed: serving client requests on 192.168.49.2:2379
	2023-05-31 18:57:06.439258 I | embed: ready to serve client requests
	2023-05-31 18:57:06.440599 I | embed: serving client requests on 127.0.0.1:2379
	
	* 
	* ==> kernel <==
	*  19:01:26 up 43 min,  0 users,  load average: 0.45, 1.00, 0.93
	Linux ingress-addon-legacy-546551 5.15.0-1037-aws #41~20.04.1-Ubuntu SMP Mon May 22 18:20:20 UTC 2023 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.2 LTS"
	
	* 
	* ==> kindnet [432f96e63003a715a35443ae4d524b5cbe1d5cb490eeba8a998ab89271494768] <==
	* I0531 18:59:22.256766       1 main.go:227] handling current node
	I0531 18:59:32.268205       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0531 18:59:32.268232       1 main.go:227] handling current node
	I0531 18:59:42.271724       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0531 18:59:42.271753       1 main.go:227] handling current node
	I0531 18:59:52.281695       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0531 18:59:52.281721       1 main.go:227] handling current node
	I0531 19:00:02.285459       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0531 19:00:02.285491       1 main.go:227] handling current node
	I0531 19:00:12.292276       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0531 19:00:12.292308       1 main.go:227] handling current node
	I0531 19:00:22.296309       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0531 19:00:22.296338       1 main.go:227] handling current node
	I0531 19:00:32.308457       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0531 19:00:32.308488       1 main.go:227] handling current node
	I0531 19:00:42.311611       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0531 19:00:42.311641       1 main.go:227] handling current node
	I0531 19:00:52.316813       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0531 19:00:52.316843       1 main.go:227] handling current node
	I0531 19:01:02.320776       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0531 19:01:02.320803       1 main.go:227] handling current node
	I0531 19:01:12.326828       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0531 19:01:12.326858       1 main.go:227] handling current node
	I0531 19:01:22.330982       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0531 19:01:22.331010       1 main.go:227] handling current node
	
	* 
	* ==> kube-apiserver [ed310a397b13cb5608026cae21cb73cbccede89ab7309ece800a9aaf1775b098] <==
	* I0531 18:57:10.863235       1 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	I0531 18:57:10.863362       1 dynamic_cafile_content.go:167] Starting request-header::/var/lib/minikube/certs/front-proxy-ca.crt
	I0531 18:57:10.952435       1 cache.go:39] Caches are synced for autoregister controller
	I0531 18:57:10.952797       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0531 18:57:10.952896       1 shared_informer.go:230] Caches are synced for crd-autoregister 
	I0531 18:57:10.976254       1 shared_informer.go:230] Caches are synced for cluster_authentication_trust_controller 
	I0531 18:57:10.976375       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0531 18:57:11.744010       1 controller.go:130] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
	I0531 18:57:11.744041       1 controller.go:130] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0531 18:57:11.759543       1 storage_scheduling.go:134] created PriorityClass system-node-critical with value 2000001000
	I0531 18:57:11.766946       1 storage_scheduling.go:134] created PriorityClass system-cluster-critical with value 2000000000
	I0531 18:57:11.766972       1 storage_scheduling.go:143] all system priority classes are created successfully or already exist.
	I0531 18:57:12.194066       1 controller.go:609] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0531 18:57:12.232964       1 controller.go:609] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	W0531 18:57:12.355342       1 lease.go:224] Resetting endpoints for master service "kubernetes" to [192.168.49.2]
	I0531 18:57:12.356456       1 controller.go:609] quota admission added evaluator for: endpoints
	I0531 18:57:12.360500       1 controller.go:609] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0531 18:57:13.212956       1 controller.go:609] quota admission added evaluator for: serviceaccounts
	I0531 18:57:14.103142       1 controller.go:609] quota admission added evaluator for: deployments.apps
	I0531 18:57:14.154911       1 controller.go:609] quota admission added evaluator for: daemonsets.apps
	I0531 18:57:17.427247       1 controller.go:609] quota admission added evaluator for: leases.coordination.k8s.io
	I0531 18:57:29.249831       1 controller.go:609] quota admission added evaluator for: replicasets.apps
	I0531 18:57:29.331560       1 controller.go:609] quota admission added evaluator for: controllerrevisions.apps
	I0531 18:58:20.681523       1 controller.go:609] quota admission added evaluator for: jobs.batch
	I0531 18:58:41.999507       1 controller.go:609] quota admission added evaluator for: ingresses.networking.k8s.io
	
	* 
	* ==> kube-controller-manager [d6e85a039460570798b8ce4500a28a698139dbe485d3bd33a4737ab30b136c67] <==
	* I0531 18:57:29.271664       1 shared_informer.go:230] Caches are synced for ReplicationController 
	I0531 18:57:29.277191       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"coredns-66bff467f8", UID:"a3eb794a-ce73-4bd1-94aa-ab1401c4e022", APIVersion:"apps/v1", ResourceVersion:"336", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: coredns-66bff467f8-v2f7s
	I0531 18:57:29.277426       1 shared_informer.go:230] Caches are synced for PVC protection 
	I0531 18:57:29.327324       1 shared_informer.go:230] Caches are synced for daemon sets 
	I0531 18:57:29.330088       1 shared_informer.go:230] Caches are synced for resource quota 
	I0531 18:57:29.330173       1 shared_informer.go:230] Caches are synced for garbage collector 
	I0531 18:57:29.338696       1 shared_informer.go:230] Caches are synced for stateful set 
	I0531 18:57:29.345279       1 event.go:278] Event(v1.ObjectReference{Kind:"DaemonSet", Namespace:"kube-system", Name:"kube-proxy", UID:"43a063fc-a305-455c-a4ec-78706bc464c9", APIVersion:"apps/v1", ResourceVersion:"217", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: kube-proxy-wxshz
	I0531 18:57:29.348992       1 event.go:278] Event(v1.ObjectReference{Kind:"DaemonSet", Namespace:"kube-system", Name:"kindnet", UID:"1884019c-a7ec-4258-b665-b16adf4661b0", APIVersion:"apps/v1", ResourceVersion:"226", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: kindnet-flrvc
	E0531 18:57:29.376193       1 daemon_controller.go:321] kube-system/kube-proxy failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kube-proxy", GenerateName:"", Namespace:"kube-system", SelfLink:"/apis/apps/v1/namespaces/kube-system/daemonsets/kube-proxy", UID:"43a063fc-a305-455c-a4ec-78706bc464c9", ResourceVersion:"217", Generation:1, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63821156234, loc:(*time.Location)(0x6307ca0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-proxy"}, Annotations:map[string]string{"deprecated.daemonset.template.generation":"1"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kubeadm", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0x40017d8a40), FieldsType:"FieldsV1", FieldsV1:(*v1.Fields
V1)(0x40017d8a60)}}}, Spec:v1.DaemonSetSpec{Selector:(*v1.LabelSelector)(0x40017d8a80), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-proxy"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"kube-proxy", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(n
il), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(0x40014ee680), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}, v1.Volume{Name:"xtables-lock", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSou
rce)(0x40017d8aa0), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.Pr
ojectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}, v1.Volume{Name:"lib-modules", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0x40017d8ac0), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolum
eSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"kube-proxy", Image:"k8s.gcr.io/kube-proxy:v1.18.20", Command:[]string{"/usr/local/bin/kube-proxy", "--config=/var/lib/kube-proxy/config.conf", "--hostname-override=$(NODE_NAME)"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"NODE_NAME", Value:"", ValueFrom:(*v1.EnvVarSource)(0x40017d8b00)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList
(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"kube-proxy", ReadOnly:false, MountPath:"/var/lib/kube-proxy", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"xtables-lock", ReadOnly:false, MountPath:"/run/xtables.lock", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"lib-modules", ReadOnly:true, MountPath:"/lib/modules", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(0x4001348e10), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0x4000cbcd78), Acti
veDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string{"kubernetes.io/os":"linux"}, ServiceAccountName:"kube-proxy", DeprecatedServiceAccount:"kube-proxy", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0x40001d37a0), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"CriticalAddonsOnly", Operator:"Exists", Value:"", Effect:"", TolerationSeconds:(*int64)(nil)}, v1.Toleration{Key:"", Operator:"Exists", Value:"", Effect:"", TolerationSeconds:(*int64)(nil)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"system-node-critical", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPoli
cy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:"RollingUpdate", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0x40000b3870)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0x4000cbcdc8)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:0, NumberMisscheduled:0, DesiredNumberScheduled:0, NumberReady:0, ObservedGeneration:0, UpdatedNumberScheduled:0, NumberAvailable:0, NumberUnavailable:0, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps "kube-proxy": the object has been modified; please apply your changes to the latest version and try again
	I0531 18:57:29.378437       1 shared_informer.go:230] Caches are synced for resource quota 
	I0531 18:57:29.404839       1 shared_informer.go:230] Caches are synced for garbage collector 
	I0531 18:57:29.404886       1 garbagecollector.go:142] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0531 18:57:29.553193       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"kube-system", Name:"coredns", UID:"af33dc95-449e-426f-bca4-841b656d5286", APIVersion:"apps/v1", ResourceVersion:"369", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled down replica set coredns-66bff467f8 to 1
	I0531 18:57:29.707636       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"coredns-66bff467f8", UID:"a3eb794a-ce73-4bd1-94aa-ab1401c4e022", APIVersion:"apps/v1", ResourceVersion:"370", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: coredns-66bff467f8-57qh7
	I0531 18:58:09.228612       1 node_lifecycle_controller.go:1226] Controller detected that some Nodes are Ready. Exiting master disruption mode.
	I0531 18:58:20.666922       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"ingress-nginx", Name:"ingress-nginx-controller", UID:"6191a0ee-870b-4026-9da7-663d925788ef", APIVersion:"apps/v1", ResourceVersion:"489", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set ingress-nginx-controller-7fcf777cb7 to 1
	I0531 18:58:20.691321       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7", UID:"d0970381-336e-4779-8b34-bf2a2b4d9433", APIVersion:"apps/v1", ResourceVersion:"490", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-controller-7fcf777cb7-8pk64
	I0531 18:58:20.740005       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-create", UID:"c3031a13-df7c-415e-8eda-16c97907b0db", APIVersion:"batch/v1", ResourceVersion:"494", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-admission-create-tnzs8
	I0531 18:58:20.773235       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-patch", UID:"1a20d83d-2338-4dad-bb8c-d331608284a6", APIVersion:"batch/v1", ResourceVersion:"502", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-admission-patch-wqlfz
	I0531 18:58:23.891974       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-create", UID:"c3031a13-df7c-415e-8eda-16c97907b0db", APIVersion:"batch/v1", ResourceVersion:"507", FieldPath:""}): type: 'Normal' reason: 'Completed' Job completed
	I0531 18:58:24.884977       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-patch", UID:"1a20d83d-2338-4dad-bb8c-d331608284a6", APIVersion:"batch/v1", ResourceVersion:"513", FieldPath:""}): type: 'Normal' reason: 'Completed' Job completed
	I0531 19:01:01.381015       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"default", Name:"hello-world-app", UID:"2ae3bd1e-ce39-4b6b-b26a-78b8fa7f0317", APIVersion:"apps/v1", ResourceVersion:"727", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set hello-world-app-5f5d8b66bb to 1
	I0531 19:01:01.399318       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"default", Name:"hello-world-app-5f5d8b66bb", UID:"3e15ad66-f18a-4b32-af9b-1fd46592ff8b", APIVersion:"apps/v1", ResourceVersion:"728", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: hello-world-app-5f5d8b66bb-4hq8x
	
	* 
	* ==> kube-proxy [62651d652bfa46afb255cf8a3387642cbdb2c7904c31f5e66089d603ceb1b94d] <==
	* W0531 18:57:31.969347       1 server_others.go:559] Unknown proxy mode "", assuming iptables proxy
	I0531 18:57:31.983045       1 node.go:136] Successfully retrieved node IP: 192.168.49.2
	I0531 18:57:31.983093       1 server_others.go:186] Using iptables Proxier.
	I0531 18:57:31.983466       1 server.go:583] Version: v1.18.20
	I0531 18:57:31.984596       1 config.go:315] Starting service config controller
	I0531 18:57:31.984719       1 shared_informer.go:223] Waiting for caches to sync for service config
	I0531 18:57:31.984861       1 config.go:133] Starting endpoints config controller
	I0531 18:57:31.984897       1 shared_informer.go:223] Waiting for caches to sync for endpoints config
	I0531 18:57:32.085051       1 shared_informer.go:230] Caches are synced for service config 
	I0531 18:57:32.085051       1 shared_informer.go:230] Caches are synced for endpoints config 
	
	* 
	* ==> kube-scheduler [8d0a96acd3df48450e3f3d4728754afdff24af06f9422bfa0e06fe79f9432e2c] <==
	* I0531 18:57:10.950416       1 registry.go:150] Registering EvenPodsSpread predicate and priority function
	I0531 18:57:10.950527       1 registry.go:150] Registering EvenPodsSpread predicate and priority function
	I0531 18:57:10.952550       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0531 18:57:10.952633       1 shared_informer.go:223] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0531 18:57:10.956084       1 secure_serving.go:178] Serving securely on 127.0.0.1:10259
	I0531 18:57:10.956183       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	E0531 18:57:10.967391       1 reflector.go:178] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0531 18:57:10.971202       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0531 18:57:10.971424       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0531 18:57:10.971568       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0531 18:57:10.971688       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0531 18:57:10.971817       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0531 18:57:10.971951       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0531 18:57:10.972057       1 reflector.go:178] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:233: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0531 18:57:10.972179       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0531 18:57:10.972285       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0531 18:57:10.972406       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0531 18:57:10.972543       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0531 18:57:11.919081       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0531 18:57:12.031419       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0531 18:57:12.034914       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0531 18:57:12.050186       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0531 18:57:12.455057       1 shared_informer.go:230] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	E0531 18:57:29.298801       1 factory.go:503] pod: kube-system/coredns-66bff467f8-57qh7 is already present in the active queue
	E0531 18:57:30.262958       1 factory.go:503] pod: kube-system/storage-provisioner is already present in the active queue
	
	* 
	* ==> kubelet <==
	* May 31 19:01:05 ingress-addon-legacy-546551 kubelet[1621]: I0531 19:01:05.178633    1621 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: fc02ec35c3cae5401cceaab0ec2c389e79d6b5c6d45d37faecf8a06b05ef0450
	May 31 19:01:05 ingress-addon-legacy-546551 kubelet[1621]: E0531 19:01:05.178948    1621 pod_workers.go:191] Error syncing pod 26e046f2-afc2-4dc6-88ae-58433414beaf ("hello-world-app-5f5d8b66bb-4hq8x_default(26e046f2-afc2-4dc6-88ae-58433414beaf)"), skipping: failed to "StartContainer" for "hello-world-app" with CrashLoopBackOff: "back-off 10s restarting failed container=hello-world-app pod=hello-world-app-5f5d8b66bb-4hq8x_default(26e046f2-afc2-4dc6-88ae-58433414beaf)"
	May 31 19:01:06 ingress-addon-legacy-546551 kubelet[1621]: I0531 19:01:06.181089    1621 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: fc02ec35c3cae5401cceaab0ec2c389e79d6b5c6d45d37faecf8a06b05ef0450
	May 31 19:01:06 ingress-addon-legacy-546551 kubelet[1621]: E0531 19:01:06.181368    1621 pod_workers.go:191] Error syncing pod 26e046f2-afc2-4dc6-88ae-58433414beaf ("hello-world-app-5f5d8b66bb-4hq8x_default(26e046f2-afc2-4dc6-88ae-58433414beaf)"), skipping: failed to "StartContainer" for "hello-world-app" with CrashLoopBackOff: "back-off 10s restarting failed container=hello-world-app pod=hello-world-app-5f5d8b66bb-4hq8x_default(26e046f2-afc2-4dc6-88ae-58433414beaf)"
	May 31 19:01:16 ingress-addon-legacy-546551 kubelet[1621]: E0531 19:01:16.577798    1621 remote_image.go:87] ImageStatus "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" from image service failed: rpc error: code = Unknown desc = short-name "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	May 31 19:01:16 ingress-addon-legacy-546551 kubelet[1621]: E0531 19:01:16.577840    1621 kuberuntime_image.go:85] ImageStatus for image {"cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab"} failed: rpc error: code = Unknown desc = short-name "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	May 31 19:01:16 ingress-addon-legacy-546551 kubelet[1621]: E0531 19:01:16.577881    1621 kuberuntime_manager.go:818] container start failed: ImageInspectError: Failed to inspect image "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab": rpc error: code = Unknown desc = short-name "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	May 31 19:01:16 ingress-addon-legacy-546551 kubelet[1621]: E0531 19:01:16.577914    1621 pod_workers.go:191] Error syncing pod d77dacb1-5ebd-4caa-9739-f665dbb6d7d1 ("kube-ingress-dns-minikube_kube-system(d77dacb1-5ebd-4caa-9739-f665dbb6d7d1)"), skipping: failed to "StartContainer" for "minikube-ingress-dns" with ImageInspectError: "Failed to inspect image \"cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab\": rpc error: code = Unknown desc = short-name \"cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab\" did not resolve to an alias and no unqualified-search registries are defined in \"/etc/containers/registries.conf\""
	May 31 19:01:17 ingress-addon-legacy-546551 kubelet[1621]: I0531 19:01:17.263100    1621 reconciler.go:196] operationExecutor.UnmountVolume started for volume "minikube-ingress-dns-token-sp2m9" (UniqueName: "kubernetes.io/secret/d77dacb1-5ebd-4caa-9739-f665dbb6d7d1-minikube-ingress-dns-token-sp2m9") pod "d77dacb1-5ebd-4caa-9739-f665dbb6d7d1" (UID: "d77dacb1-5ebd-4caa-9739-f665dbb6d7d1")
	May 31 19:01:17 ingress-addon-legacy-546551 kubelet[1621]: I0531 19:01:17.269825    1621 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d77dacb1-5ebd-4caa-9739-f665dbb6d7d1-minikube-ingress-dns-token-sp2m9" (OuterVolumeSpecName: "minikube-ingress-dns-token-sp2m9") pod "d77dacb1-5ebd-4caa-9739-f665dbb6d7d1" (UID: "d77dacb1-5ebd-4caa-9739-f665dbb6d7d1"). InnerVolumeSpecName "minikube-ingress-dns-token-sp2m9". PluginName "kubernetes.io/secret", VolumeGidValue ""
	May 31 19:01:17 ingress-addon-legacy-546551 kubelet[1621]: I0531 19:01:17.363507    1621 reconciler.go:319] Volume detached for volume "minikube-ingress-dns-token-sp2m9" (UniqueName: "kubernetes.io/secret/d77dacb1-5ebd-4caa-9739-f665dbb6d7d1-minikube-ingress-dns-token-sp2m9") on node "ingress-addon-legacy-546551" DevicePath ""
	May 31 19:01:18 ingress-addon-legacy-546551 kubelet[1621]: W0531 19:01:18.198760    1621 pod_container_deletor.go:77] Container "df8d9b55b49d6a2cfa7936d99762082fbc3c6333f0b34609e5d413525242d296" not found in pod's containers
	May 31 19:01:18 ingress-addon-legacy-546551 kubelet[1621]: E0531 19:01:18.312126    1621 event.go:260] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ingress-nginx-controller-7fcf777cb7-8pk64.17644f98d15dd927", GenerateName:"", Namespace:"ingress-nginx", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7-8pk64", UID:"2847ccee-57c8-4113-b170-7bd9fe2494fa", APIVersion:"v1", ResourceVersion:"496", FieldPath:"spec.containers{controller}"}, Reason:"Killing", Message:"Stoppi
ng container controller", Source:v1.EventSource{Component:"kubelet", Host:"ingress-addon-legacy-546551"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc116043f92746d27, ext:244342490774, loc:(*time.Location)(0x6a0ef20)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc116043f92746d27, ext:244342490774, loc:(*time.Location)(0x6a0ef20)}}, Count:1, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "ingress-nginx-controller-7fcf777cb7-8pk64.17644f98d15dd927" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated' (will not retry!)
	May 31 19:01:18 ingress-addon-legacy-546551 kubelet[1621]: E0531 19:01:18.327835    1621 event.go:260] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ingress-nginx-controller-7fcf777cb7-8pk64.17644f98d15dd927", GenerateName:"", Namespace:"ingress-nginx", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7-8pk64", UID:"2847ccee-57c8-4113-b170-7bd9fe2494fa", APIVersion:"v1", ResourceVersion:"496", FieldPath:"spec.containers{controller}"}, Reason:"Killing", Message:"Stoppi
ng container controller", Source:v1.EventSource{Component:"kubelet", Host:"ingress-addon-legacy-546551"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc116043f92746d27, ext:244342490774, loc:(*time.Location)(0x6a0ef20)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc116043f9313b7f7, ext:244352930150, loc:(*time.Location)(0x6a0ef20)}}, Count:2, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "ingress-nginx-controller-7fcf777cb7-8pk64.17644f98d15dd927" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated' (will not retry!)
	May 31 19:01:21 ingress-addon-legacy-546551 kubelet[1621]: W0531 19:01:21.204033    1621 pod_container_deletor.go:77] Container "3b6a80e44fc77d158dc0738dc1515c3f540bfaf18f3705b9a86612c2b90728a0" not found in pod's containers
	May 31 19:01:21 ingress-addon-legacy-546551 kubelet[1621]: I0531 19:01:21.272673    1621 reconciler.go:196] operationExecutor.UnmountVolume started for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/2847ccee-57c8-4113-b170-7bd9fe2494fa-webhook-cert") pod "2847ccee-57c8-4113-b170-7bd9fe2494fa" (UID: "2847ccee-57c8-4113-b170-7bd9fe2494fa")
	May 31 19:01:21 ingress-addon-legacy-546551 kubelet[1621]: I0531 19:01:21.272757    1621 reconciler.go:196] operationExecutor.UnmountVolume started for volume "ingress-nginx-token-7sfsb" (UniqueName: "kubernetes.io/secret/2847ccee-57c8-4113-b170-7bd9fe2494fa-ingress-nginx-token-7sfsb") pod "2847ccee-57c8-4113-b170-7bd9fe2494fa" (UID: "2847ccee-57c8-4113-b170-7bd9fe2494fa")
	May 31 19:01:21 ingress-addon-legacy-546551 kubelet[1621]: I0531 19:01:21.278564    1621 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2847ccee-57c8-4113-b170-7bd9fe2494fa-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "2847ccee-57c8-4113-b170-7bd9fe2494fa" (UID: "2847ccee-57c8-4113-b170-7bd9fe2494fa"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	May 31 19:01:21 ingress-addon-legacy-546551 kubelet[1621]: I0531 19:01:21.279287    1621 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2847ccee-57c8-4113-b170-7bd9fe2494fa-ingress-nginx-token-7sfsb" (OuterVolumeSpecName: "ingress-nginx-token-7sfsb") pod "2847ccee-57c8-4113-b170-7bd9fe2494fa" (UID: "2847ccee-57c8-4113-b170-7bd9fe2494fa"). InnerVolumeSpecName "ingress-nginx-token-7sfsb". PluginName "kubernetes.io/secret", VolumeGidValue ""
	May 31 19:01:21 ingress-addon-legacy-546551 kubelet[1621]: I0531 19:01:21.373119    1621 reconciler.go:319] Volume detached for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/2847ccee-57c8-4113-b170-7bd9fe2494fa-webhook-cert") on node "ingress-addon-legacy-546551" DevicePath ""
	May 31 19:01:21 ingress-addon-legacy-546551 kubelet[1621]: I0531 19:01:21.373180    1621 reconciler.go:319] Volume detached for volume "ingress-nginx-token-7sfsb" (UniqueName: "kubernetes.io/secret/2847ccee-57c8-4113-b170-7bd9fe2494fa-ingress-nginx-token-7sfsb") on node "ingress-addon-legacy-546551" DevicePath ""
	May 31 19:01:21 ingress-addon-legacy-546551 kubelet[1621]: I0531 19:01:21.577133    1621 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: fc02ec35c3cae5401cceaab0ec2c389e79d6b5c6d45d37faecf8a06b05ef0450
	May 31 19:01:22 ingress-addon-legacy-546551 kubelet[1621]: I0531 19:01:22.207230    1621 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: fc02ec35c3cae5401cceaab0ec2c389e79d6b5c6d45d37faecf8a06b05ef0450
	May 31 19:01:22 ingress-addon-legacy-546551 kubelet[1621]: I0531 19:01:22.207489    1621 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: 2480613c3044c1d3bca4dab208e811c8a58e2792726c0cc2248892c388451a32
	May 31 19:01:22 ingress-addon-legacy-546551 kubelet[1621]: E0531 19:01:22.207716    1621 pod_workers.go:191] Error syncing pod 26e046f2-afc2-4dc6-88ae-58433414beaf ("hello-world-app-5f5d8b66bb-4hq8x_default(26e046f2-afc2-4dc6-88ae-58433414beaf)"), skipping: failed to "StartContainer" for "hello-world-app" with CrashLoopBackOff: "back-off 20s restarting failed container=hello-world-app pod=hello-world-app-5f5d8b66bb-4hq8x_default(26e046f2-afc2-4dc6-88ae-58433414beaf)"
	
	* 
	* ==> storage-provisioner [12a1cb12a6a0ff405e3b9218c8d34f2f415ea950190058592d2fab713009f40a] <==
	* I0531 18:58:12.492133       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0531 18:58:12.507259       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0531 18:58:12.507474       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0531 18:58:12.546135       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0531 18:58:12.546403       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_ingress-addon-legacy-546551_ec6e385e-c3aa-4537-84b2-be0ad9a75e0d!
	I0531 18:58:12.547002       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"da980944-daf9-4955-b380-5f8b7b600409", APIVersion:"v1", ResourceVersion:"439", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' ingress-addon-legacy-546551_ec6e385e-c3aa-4537-84b2-be0ad9a75e0d became leader
	I0531 18:58:12.647920       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_ingress-addon-legacy-546551_ec6e385e-c3aa-4537-84b2-be0ad9a75e0d!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p ingress-addon-legacy-546551 -n ingress-addon-legacy-546551
helpers_test.go:261: (dbg) Run:  kubectl --context ingress-addon-legacy-546551 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestIngressAddonLegacy/serial/ValidateIngressAddons FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestIngressAddonLegacy/serial/ValidateIngressAddons (175.50s)
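Note on the kubelet errors in the log above: cri-o refuses to pull the ingress-dns image because it is referenced by a short name ("did not resolve to an alias and no unqualified-search registries are defined in /etc/containers/registries.conf"), so the kube-ingress-dns-minikube pod never starts. A minimal sketch of a workaround, assuming docker.io is where the image actually lives (this report does not confirm that), is to declare a default search registry on the node:

	# Sketch only; assumes docker.io hosts the image. Appended to
	# /etc/containers/registries.conf inside the node (e.g. via `minikube ssh`),
	# this lets cri-o expand short names like cryptexlabs/minikube-ingress-dns.
	unqualified-search-registries = ["docker.io"]

Fully qualifying the image reference in the addon manifest (docker.io/cryptexlabs/...) would sidestep the short-name lookup entirely and is the less surprising fix.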

                                                
                                    
x
+
TestMultiNode/serial/PingHostFrom2Pods (4s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:552: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-025078 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:560: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-025078 -- exec busybox-67b7f59bb-9zwlk -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:571: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-025078 -- exec busybox-67b7f59bb-9zwlk -- sh -c "ping -c 1 192.168.58.1"
multinode_test.go:571: (dbg) Non-zero exit: out/minikube-linux-arm64 kubectl -p multinode-025078 -- exec busybox-67b7f59bb-9zwlk -- sh -c "ping -c 1 192.168.58.1": exit status 1 (238.440474ms)

                                                
                                                
-- stdout --
	PING 192.168.58.1 (192.168.58.1): 56 data bytes

                                                
                                                
-- /stdout --
** stderr ** 
	ping: permission denied (are you root?)
	command terminated with exit code 1

                                                
                                                
** /stderr **
multinode_test.go:572: Failed to ping host (192.168.58.1) from pod (busybox-67b7f59bb-9zwlk): exit status 1
multinode_test.go:560: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-025078 -- exec busybox-67b7f59bb-fn4vn -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:571: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-025078 -- exec busybox-67b7f59bb-fn4vn -- sh -c "ping -c 1 192.168.58.1"
multinode_test.go:571: (dbg) Non-zero exit: out/minikube-linux-arm64 kubectl -p multinode-025078 -- exec busybox-67b7f59bb-fn4vn -- sh -c "ping -c 1 192.168.58.1": exit status 1 (240.92462ms)

                                                
                                                
-- stdout --
	PING 192.168.58.1 (192.168.58.1): 56 data bytes

                                                
                                                
-- /stdout --
** stderr ** 
	ping: permission denied (are you root?)
	command terminated with exit code 1

                                                
                                                
** /stderr **
multinode_test.go:572: Failed to ping host (192.168.58.1) from pod (busybox-67b7f59bb-fn4vn): exit status 1
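Both pods die on the same error ("ping: permission denied (are you root?)"), which points at the pod's privileges rather than the network path: busybox's ping wants a raw ICMP socket, and an unprivileged container can only open one if it holds CAP_NET_RAW (or, for datagram ICMP, if net.ipv4.ping_group_range covers its GID inside the pod's network namespace). A minimal sketch of a pod that should be allowed to ping, assuming the runtime otherwise drops NET_RAW (the pod name and busybox tag are illustrative, not taken from the test):

	# Sketch only; "ping-cap-test" and busybox:1.36 are hypothetical names.
	# --overrides injects a securityContext that grants CAP_NET_RAW to the container.
	kubectl --context multinode-025078 run ping-cap-test --image=busybox:1.36 --restart=Never \
	  --overrides='{"spec":{"containers":[{"name":"ping-cap-test","image":"busybox:1.36",
	    "command":["ping","-c","1","192.168.58.1"],
	    "securityContext":{"capabilities":{"add":["NET_RAW"]}}}]}}'

Alternatively, net.ipv4.ping_group_range is treated as a safe sysctl by recent Kubernetes releases, so setting it via the pod's securityContext.sysctls avoids granting NET_RAW, though busybox's ping applet may still insist on a raw socket depending on its version.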
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/PingHostFrom2Pods]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-025078
helpers_test.go:235: (dbg) docker inspect multinode-025078:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "9d3cbea3acdb0b8d1b32dece59d31af9111f937e060f05b7083a59033f8e8705",
	        "Created": "2023-05-31T19:08:03.425679855Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 72364,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-05-31T19:08:03.750956973Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:dee4774de4f99268e16c76379a36a4607bed47635a069a7e60c17cd24d9aaa76",
	        "ResolvConfPath": "/var/lib/docker/containers/9d3cbea3acdb0b8d1b32dece59d31af9111f937e060f05b7083a59033f8e8705/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/9d3cbea3acdb0b8d1b32dece59d31af9111f937e060f05b7083a59033f8e8705/hostname",
	        "HostsPath": "/var/lib/docker/containers/9d3cbea3acdb0b8d1b32dece59d31af9111f937e060f05b7083a59033f8e8705/hosts",
	        "LogPath": "/var/lib/docker/containers/9d3cbea3acdb0b8d1b32dece59d31af9111f937e060f05b7083a59033f8e8705/9d3cbea3acdb0b8d1b32dece59d31af9111f937e060f05b7083a59033f8e8705-json.log",
	        "Name": "/multinode-025078",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "multinode-025078:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "multinode-025078",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/843471d5fd39947a4888b437e9a5860edc029529b77cbb4781137905bb47c6a3-init/diff:/var/lib/docker/overlay2/548bced7e749d102323bab71db162b075785f916e2a896d29f3adc2c3d7fbea8/diff",
	                "MergedDir": "/var/lib/docker/overlay2/843471d5fd39947a4888b437e9a5860edc029529b77cbb4781137905bb47c6a3/merged",
	                "UpperDir": "/var/lib/docker/overlay2/843471d5fd39947a4888b437e9a5860edc029529b77cbb4781137905bb47c6a3/diff",
	                "WorkDir": "/var/lib/docker/overlay2/843471d5fd39947a4888b437e9a5860edc029529b77cbb4781137905bb47c6a3/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "multinode-025078",
	                "Source": "/var/lib/docker/volumes/multinode-025078/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "multinode-025078",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685034446-16582@sha256:aa728b22374c829d1e5b0a5d64d51d3e0ae0f2b191381d957516fdff68f357c8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "multinode-025078",
	                "name.minikube.sigs.k8s.io": "multinode-025078",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "851542f9daa4d07c4487c3882b449fe8f3876e594ac3aabda9778e5c59a498ce",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32847"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32846"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32843"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32845"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32844"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/851542f9daa4",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "multinode-025078": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.58.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "9d3cbea3acdb",
	                        "multinode-025078"
	                    ],
	                    "NetworkID": "36efdaa82add22f9c3211bfe6bb21dc2594617c450bcb54ef419204760e0689e",
	                    "EndpointID": "05e50f6c3482299d81d789784316e316b75c013ccf52b9f12b3f7babf8f4938f",
	                    "Gateway": "192.168.58.1",
	                    "IPAddress": "192.168.58.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:3a:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
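
The inspect output above shows every container port published to an ephemeral host port on 127.0.0.1 (22/tcp -> 32847, 8443/tcp -> 32844, and so on); the harness resolves these mappings before dialing SSH. Below is a minimal sketch, not minikube's actual helper, of reading such a mapping with the same inspect template that appears later in these logs; the container name and port are copied from this report:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// hostPort returns the 127.0.0.1 host port docker published for the given
// container port, using the same Go template the cli_runner invocations
// in the logs below use.
func hostPort(container, port string) (string, error) {
	tmpl := fmt.Sprintf(`{{(index (index .NetworkSettings.Ports %q) 0).HostPort}}`, port)
	out, err := exec.Command("docker", "container", "inspect", "-f", tmpl, container).Output()
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	p, err := hostPort("multinode-025078", "22/tcp")
	if err != nil {
		panic(err)
	}
	fmt.Println(p) // 32847 in the inspect output above
}
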
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p multinode-025078 -n multinode-025078
helpers_test.go:244: <<< TestMultiNode/serial/PingHostFrom2Pods FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/PingHostFrom2Pods]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p multinode-025078 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p multinode-025078 logs -n 25: (1.531385245s)
helpers_test.go:252: TestMultiNode/serial/PingHostFrom2Pods logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|---------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                       Args                        |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -p mount-start-2-057691                           | mount-start-2-057691 | jenkins | v1.30.1 | 31 May 23 19:07 UTC | 31 May 23 19:07 UTC |
	|         | --memory=2048 --mount                             |                      |         |         |                     |                     |
	|         | --mount-gid 0 --mount-msize                       |                      |         |         |                     |                     |
	|         | 6543 --mount-port 46465                           |                      |         |         |                     |                     |
	|         | --mount-uid 0 --no-kubernetes                     |                      |         |         |                     |                     |
	|         | --driver=docker                                   |                      |         |         |                     |                     |
	|         | --container-runtime=crio                          |                      |         |         |                     |                     |
	| ssh     | mount-start-2-057691 ssh -- ls                    | mount-start-2-057691 | jenkins | v1.30.1 | 31 May 23 19:07 UTC | 31 May 23 19:07 UTC |
	|         | /minikube-host                                    |                      |         |         |                     |                     |
	| delete  | -p mount-start-1-055738                           | mount-start-1-055738 | jenkins | v1.30.1 | 31 May 23 19:07 UTC | 31 May 23 19:07 UTC |
	|         | --alsologtostderr -v=5                            |                      |         |         |                     |                     |
	| ssh     | mount-start-2-057691 ssh -- ls                    | mount-start-2-057691 | jenkins | v1.30.1 | 31 May 23 19:07 UTC | 31 May 23 19:07 UTC |
	|         | /minikube-host                                    |                      |         |         |                     |                     |
	| stop    | -p mount-start-2-057691                           | mount-start-2-057691 | jenkins | v1.30.1 | 31 May 23 19:07 UTC | 31 May 23 19:07 UTC |
	| start   | -p mount-start-2-057691                           | mount-start-2-057691 | jenkins | v1.30.1 | 31 May 23 19:07 UTC | 31 May 23 19:07 UTC |
	| ssh     | mount-start-2-057691 ssh -- ls                    | mount-start-2-057691 | jenkins | v1.30.1 | 31 May 23 19:07 UTC | 31 May 23 19:07 UTC |
	|         | /minikube-host                                    |                      |         |         |                     |                     |
	| delete  | -p mount-start-2-057691                           | mount-start-2-057691 | jenkins | v1.30.1 | 31 May 23 19:07 UTC | 31 May 23 19:07 UTC |
	| delete  | -p mount-start-1-055738                           | mount-start-1-055738 | jenkins | v1.30.1 | 31 May 23 19:07 UTC | 31 May 23 19:07 UTC |
	| start   | -p multinode-025078                               | multinode-025078     | jenkins | v1.30.1 | 31 May 23 19:07 UTC | 31 May 23 19:09 UTC |
	|         | --wait=true --memory=2200                         |                      |         |         |                     |                     |
	|         | --nodes=2 -v=8                                    |                      |         |         |                     |                     |
	|         | --alsologtostderr                                 |                      |         |         |                     |                     |
	|         | --driver=docker                                   |                      |         |         |                     |                     |
	|         | --container-runtime=crio                          |                      |         |         |                     |                     |
	| kubectl | -p multinode-025078 -- apply -f                   | multinode-025078     | jenkins | v1.30.1 | 31 May 23 19:09 UTC | 31 May 23 19:09 UTC |
	|         | ./testdata/multinodes/multinode-pod-dns-test.yaml |                      |         |         |                     |                     |
	| kubectl | -p multinode-025078 -- rollout                    | multinode-025078     | jenkins | v1.30.1 | 31 May 23 19:09 UTC | 31 May 23 19:09 UTC |
	|         | status deployment/busybox                         |                      |         |         |                     |                     |
	| kubectl | -p multinode-025078 -- get pods -o                | multinode-025078     | jenkins | v1.30.1 | 31 May 23 19:09 UTC | 31 May 23 19:09 UTC |
	|         | jsonpath='{.items[*].status.podIP}'               |                      |         |         |                     |                     |
	| kubectl | -p multinode-025078 -- get pods -o                | multinode-025078     | jenkins | v1.30.1 | 31 May 23 19:09 UTC | 31 May 23 19:09 UTC |
	|         | jsonpath='{.items[*].metadata.name}'              |                      |         |         |                     |                     |
	| kubectl | -p multinode-025078 -- exec                       | multinode-025078     | jenkins | v1.30.1 | 31 May 23 19:09 UTC | 31 May 23 19:09 UTC |
	|         | busybox-67b7f59bb-9zwlk --                        |                      |         |         |                     |                     |
	|         | nslookup kubernetes.io                            |                      |         |         |                     |                     |
	| kubectl | -p multinode-025078 -- exec                       | multinode-025078     | jenkins | v1.30.1 | 31 May 23 19:09 UTC | 31 May 23 19:09 UTC |
	|         | busybox-67b7f59bb-fn4vn --                        |                      |         |         |                     |                     |
	|         | nslookup kubernetes.io                            |                      |         |         |                     |                     |
	| kubectl | -p multinode-025078 -- exec                       | multinode-025078     | jenkins | v1.30.1 | 31 May 23 19:09 UTC | 31 May 23 19:09 UTC |
	|         | busybox-67b7f59bb-9zwlk --                        |                      |         |         |                     |                     |
	|         | nslookup kubernetes.default                       |                      |         |         |                     |                     |
	| kubectl | -p multinode-025078 -- exec                       | multinode-025078     | jenkins | v1.30.1 | 31 May 23 19:09 UTC | 31 May 23 19:09 UTC |
	|         | busybox-67b7f59bb-fn4vn --                        |                      |         |         |                     |                     |
	|         | nslookup kubernetes.default                       |                      |         |         |                     |                     |
	| kubectl | -p multinode-025078 -- exec                       | multinode-025078     | jenkins | v1.30.1 | 31 May 23 19:09 UTC | 31 May 23 19:09 UTC |
	|         | busybox-67b7f59bb-9zwlk -- nslookup               |                      |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local              |                      |         |         |                     |                     |
	| kubectl | -p multinode-025078 -- exec                       | multinode-025078     | jenkins | v1.30.1 | 31 May 23 19:09 UTC | 31 May 23 19:09 UTC |
	|         | busybox-67b7f59bb-fn4vn -- nslookup               |                      |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local              |                      |         |         |                     |                     |
	| kubectl | -p multinode-025078 -- get pods -o                | multinode-025078     | jenkins | v1.30.1 | 31 May 23 19:09 UTC | 31 May 23 19:09 UTC |
	|         | jsonpath='{.items[*].metadata.name}'              |                      |         |         |                     |                     |
	| kubectl | -p multinode-025078 -- exec                       | multinode-025078     | jenkins | v1.30.1 | 31 May 23 19:09 UTC | 31 May 23 19:09 UTC |
	|         | busybox-67b7f59bb-9zwlk                           |                      |         |         |                     |                     |
	|         | -- sh -c nslookup                                 |                      |         |         |                     |                     |
	|         | host.minikube.internal | awk                      |                      |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3                           |                      |         |         |                     |                     |
	| kubectl | -p multinode-025078 -- exec                       | multinode-025078     | jenkins | v1.30.1 | 31 May 23 19:09 UTC |                     |
	|         | busybox-67b7f59bb-9zwlk -- sh                     |                      |         |         |                     |                     |
	|         | -c ping -c 1 192.168.58.1                         |                      |         |         |                     |                     |
	| kubectl | -p multinode-025078 -- exec                       | multinode-025078     | jenkins | v1.30.1 | 31 May 23 19:09 UTC | 31 May 23 19:09 UTC |
	|         | busybox-67b7f59bb-fn4vn                           |                      |         |         |                     |                     |
	|         | -- sh -c nslookup                                 |                      |         |         |                     |                     |
	|         | host.minikube.internal | awk                      |                      |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3                           |                      |         |         |                     |                     |
	| kubectl | -p multinode-025078 -- exec                       | multinode-025078     | jenkins | v1.30.1 | 31 May 23 19:09 UTC |                     |
	|         | busybox-67b7f59bb-fn4vn -- sh                     |                      |         |         |                     |                     |
	|         | -c ping -c 1 192.168.58.1                         |                      |         |         |                     |                     |
	|---------|---------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
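
The only Audit rows above without an End Time are the two "ping -c 1 192.168.58.1" execs, i.e. the step PingHostFrom2Pods fails on. A minimal sketch of re-running that check outside the harness follows, assuming plain kubectl is on PATH and a kubectl context named after the profile exists (the report itself drives this through minikube's kubectl wrapper); pod name and gateway IP are copied from the rows above:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Ping the docker network gateway (192.168.58.1) from one of the
	// busybox pods listed in the Audit table.
	cmd := exec.Command("kubectl", "--context", "multinode-025078", "exec",
		"busybox-67b7f59bb-9zwlk", "--", "sh", "-c", "ping -c 1 192.168.58.1")
	out, err := cmd.CombinedOutput()
	fmt.Print(string(out))
	if err != nil {
		// In this run the ping never completes, which is why the row has no End Time.
		fmt.Println("ping to host gateway failed:", err)
	}
}
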
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/05/31 19:07:58
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.20.4 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0531 19:07:58.118813   71907 out.go:296] Setting OutFile to fd 1 ...
	I0531 19:07:58.118933   71907 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0531 19:07:58.118941   71907 out.go:309] Setting ErrFile to fd 2...
	I0531 19:07:58.118946   71907 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0531 19:07:58.119120   71907 root.go:336] Updating PATH: /home/jenkins/minikube-integration/16569-2389/.minikube/bin
	I0531 19:07:58.119526   71907 out.go:303] Setting JSON to false
	I0531 19:07:58.120405   71907 start.go:127] hostinfo: {"hostname":"ip-172-31-21-244","uptime":3023,"bootTime":1685557055,"procs":299,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1037-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I0531 19:07:58.120466   71907 start.go:137] virtualization:  
	I0531 19:07:58.122596   71907 out.go:177] * [multinode-025078] minikube v1.30.1 on Ubuntu 20.04 (arm64)
	I0531 19:07:58.124788   71907 out.go:177]   - MINIKUBE_LOCATION=16569
	I0531 19:07:58.126996   71907 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0531 19:07:58.124863   71907 notify.go:220] Checking for updates...
	I0531 19:07:58.129968   71907 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/16569-2389/kubeconfig
	I0531 19:07:58.131989   71907 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/16569-2389/.minikube
	I0531 19:07:58.133368   71907 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0531 19:07:58.135158   71907 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0531 19:07:58.137179   71907 driver.go:375] Setting default libvirt URI to qemu:///system
	I0531 19:07:58.159831   71907 docker.go:121] docker version: linux-24.0.2:Docker Engine - Community
	I0531 19:07:58.159924   71907 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0531 19:07:58.239479   71907 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:28 OomKillDisable:true NGoroutines:35 SystemTime:2023-05-31 19:07:58.229709024 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1037-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215166976 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:24.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dce8eb055cbb6872793272b4f20ed16117344f8 Expected:3dce8eb055cbb6872793272b4f20ed16117344f8} RuncCommit:{ID:v1.1.7-0-g860f061 Expected:v1.1.7-0-g860f061} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.5] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.18.1]] Warnings:<nil>}}
	I0531 19:07:58.239578   71907 docker.go:294] overlay module found
	I0531 19:07:58.241873   71907 out.go:177] * Using the docker driver based on user configuration
	I0531 19:07:58.243413   71907 start.go:297] selected driver: docker
	I0531 19:07:58.243429   71907 start.go:875] validating driver "docker" against <nil>
	I0531 19:07:58.243442   71907 start.go:886] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0531 19:07:58.244046   71907 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0531 19:07:58.311184   71907 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:28 OomKillDisable:true NGoroutines:35 SystemTime:2023-05-31 19:07:58.301906307 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1037-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215166976 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:24.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dce8eb055cbb6872793272b4f20ed16117344f8 Expected:3dce8eb055cbb6872793272b4f20ed16117344f8} RuncCommit:{ID:v1.1.7-0-g860f061 Expected:v1.1.7-0-g860f061} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.5] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.18.1]] Warnings:<nil>}}
	I0531 19:07:58.311335   71907 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0531 19:07:58.311548   71907 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0531 19:07:58.313271   71907 out.go:177] * Using Docker driver with root privileges
	I0531 19:07:58.314783   71907 cni.go:84] Creating CNI manager for ""
	I0531 19:07:58.314796   71907 cni.go:136] 0 nodes found, recommending kindnet
	I0531 19:07:58.314804   71907 start_flags.go:314] Found "CNI" CNI - setting NetworkPlugin=cni
	I0531 19:07:58.314830   71907 start_flags.go:319] config:
	{Name:multinode-025078 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685034446-16582@sha256:aa728b22374c829d1e5b0a5d64d51d3e0ae0f2b191381d957516fdff68f357c8 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:multinode-025078 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0531 19:07:58.316518   71907 out.go:177] * Starting control plane node multinode-025078 in cluster multinode-025078
	I0531 19:07:58.318194   71907 cache.go:122] Beginning downloading kic base image for docker with crio
	I0531 19:07:58.319771   71907 out.go:177] * Pulling base image ...
	I0531 19:07:58.321417   71907 preload.go:132] Checking if preload exists for k8s version v1.27.2 and runtime crio
	I0531 19:07:58.321464   71907 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685034446-16582@sha256:aa728b22374c829d1e5b0a5d64d51d3e0ae0f2b191381d957516fdff68f357c8 in local docker daemon
	I0531 19:07:58.321480   71907 preload.go:148] Found local preload: /home/jenkins/minikube-integration/16569-2389/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.2-cri-o-overlay-arm64.tar.lz4
	I0531 19:07:58.321491   71907 cache.go:57] Caching tarball of preloaded images
	I0531 19:07:58.321556   71907 preload.go:174] Found /home/jenkins/minikube-integration/16569-2389/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.2-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I0531 19:07:58.321565   71907 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.2 on crio
	I0531 19:07:58.321953   71907 profile.go:148] Saving config to /home/jenkins/minikube-integration/16569-2389/.minikube/profiles/multinode-025078/config.json ...
	I0531 19:07:58.321983   71907 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16569-2389/.minikube/profiles/multinode-025078/config.json: {Name:mkaa13a061119f9cab2d9597bdf8db76f855114b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0531 19:07:58.338441   71907 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685034446-16582@sha256:aa728b22374c829d1e5b0a5d64d51d3e0ae0f2b191381d957516fdff68f357c8 in local docker daemon, skipping pull
	I0531 19:07:58.338465   71907 cache.go:145] gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685034446-16582@sha256:aa728b22374c829d1e5b0a5d64d51d3e0ae0f2b191381d957516fdff68f357c8 exists in daemon, skipping load
	I0531 19:07:58.338485   71907 cache.go:195] Successfully downloaded all kic artifacts
	I0531 19:07:58.338531   71907 start.go:364] acquiring machines lock for multinode-025078: {Name:mk60c1a49f8930e81d780a8ecadbaa79ad5f9170 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0531 19:07:58.338649   71907 start.go:368] acquired machines lock for "multinode-025078" in 100.381µs
	I0531 19:07:58.338674   71907 start.go:93] Provisioning new machine with config: &{Name:multinode-025078 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685034446-16582@sha256:aa728b22374c829d1e5b0a5d64d51d3e0ae0f2b191381d957516fdff68f357c8 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:multinode-025078 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:} &{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0531 19:07:58.338782   71907 start.go:125] createHost starting for "" (driver="docker")
	I0531 19:07:58.340699   71907 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0531 19:07:58.340957   71907 start.go:159] libmachine.API.Create for "multinode-025078" (driver="docker")
	I0531 19:07:58.340980   71907 client.go:168] LocalClient.Create starting
	I0531 19:07:58.341065   71907 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/16569-2389/.minikube/certs/ca.pem
	I0531 19:07:58.341099   71907 main.go:141] libmachine: Decoding PEM data...
	I0531 19:07:58.341119   71907 main.go:141] libmachine: Parsing certificate...
	I0531 19:07:58.341173   71907 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/16569-2389/.minikube/certs/cert.pem
	I0531 19:07:58.341189   71907 main.go:141] libmachine: Decoding PEM data...
	I0531 19:07:58.341199   71907 main.go:141] libmachine: Parsing certificate...
	I0531 19:07:58.341580   71907 cli_runner.go:164] Run: docker network inspect multinode-025078 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0531 19:07:58.358582   71907 cli_runner.go:211] docker network inspect multinode-025078 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0531 19:07:58.358662   71907 network_create.go:281] running [docker network inspect multinode-025078] to gather additional debugging logs...
	I0531 19:07:58.358678   71907 cli_runner.go:164] Run: docker network inspect multinode-025078
	W0531 19:07:58.375413   71907 cli_runner.go:211] docker network inspect multinode-025078 returned with exit code 1
	I0531 19:07:58.375443   71907 network_create.go:284] error running [docker network inspect multinode-025078]: docker network inspect multinode-025078: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network multinode-025078 not found
	I0531 19:07:58.375468   71907 network_create.go:286] output of [docker network inspect multinode-025078]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network multinode-025078 not found
	
	** /stderr **
	I0531 19:07:58.375536   71907 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0531 19:07:58.393265   71907 network.go:214] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-84359259bfe9 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:f9:de:3f:c7} reservation:<nil>}
	I0531 19:07:58.393578   71907 network.go:209] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40010efbd0}
	I0531 19:07:58.393600   71907 network_create.go:123] attempt to create docker network multinode-025078 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I0531 19:07:58.393655   71907 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=multinode-025078 multinode-025078
	I0531 19:07:58.471397   71907 network_create.go:107] docker network multinode-025078 192.168.58.0/24 created
	I0531 19:07:58.471432   71907 kic.go:117] calculated static IP "192.168.58.2" for the "multinode-025078" container
	I0531 19:07:58.471524   71907 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0531 19:07:58.488460   71907 cli_runner.go:164] Run: docker volume create multinode-025078 --label name.minikube.sigs.k8s.io=multinode-025078 --label created_by.minikube.sigs.k8s.io=true
	I0531 19:07:58.506420   71907 oci.go:103] Successfully created a docker volume multinode-025078
	I0531 19:07:58.506507   71907 cli_runner.go:164] Run: docker run --rm --name multinode-025078-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-025078 --entrypoint /usr/bin/test -v multinode-025078:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685034446-16582@sha256:aa728b22374c829d1e5b0a5d64d51d3e0ae0f2b191381d957516fdff68f357c8 -d /var/lib
	I0531 19:07:59.084797   71907 oci.go:107] Successfully prepared a docker volume multinode-025078
	I0531 19:07:59.084837   71907 preload.go:132] Checking if preload exists for k8s version v1.27.2 and runtime crio
	I0531 19:07:59.084856   71907 kic.go:190] Starting extracting preloaded images to volume ...
	I0531 19:07:59.084941   71907 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/16569-2389/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.2-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v multinode-025078:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685034446-16582@sha256:aa728b22374c829d1e5b0a5d64d51d3e0ae0f2b191381d957516fdff68f357c8 -I lz4 -xf /preloaded.tar -C /extractDir
	I0531 19:08:03.338811   71907 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/16569-2389/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.2-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v multinode-025078:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685034446-16582@sha256:aa728b22374c829d1e5b0a5d64d51d3e0ae0f2b191381d957516fdff68f357c8 -I lz4 -xf /preloaded.tar -C /extractDir: (4.253833382s)
	I0531 19:08:03.338856   71907 kic.go:199] duration metric: took 4.253996 seconds to extract preloaded images to volume
	W0531 19:08:03.339009   71907 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0531 19:08:03.339125   71907 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0531 19:08:03.408227   71907 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname multinode-025078 --name multinode-025078 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-025078 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=multinode-025078 --network multinode-025078 --ip 192.168.58.2 --volume multinode-025078:/var --security-opt apparmor=unconfined --memory=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685034446-16582@sha256:aa728b22374c829d1e5b0a5d64d51d3e0ae0f2b191381d957516fdff68f357c8
	I0531 19:08:03.759721   71907 cli_runner.go:164] Run: docker container inspect multinode-025078 --format={{.State.Running}}
	I0531 19:08:03.786387   71907 cli_runner.go:164] Run: docker container inspect multinode-025078 --format={{.State.Status}}
	I0531 19:08:03.819098   71907 cli_runner.go:164] Run: docker exec multinode-025078 stat /var/lib/dpkg/alternatives/iptables
	I0531 19:08:03.885985   71907 oci.go:144] the created container "multinode-025078" has a running status.
	I0531 19:08:03.886007   71907 kic.go:221] Creating ssh key for kic: /home/jenkins/minikube-integration/16569-2389/.minikube/machines/multinode-025078/id_rsa...
	I0531 19:08:05.168175   71907 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16569-2389/.minikube/machines/multinode-025078/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0531 19:08:05.168297   71907 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/16569-2389/.minikube/machines/multinode-025078/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0531 19:08:05.190939   71907 cli_runner.go:164] Run: docker container inspect multinode-025078 --format={{.State.Status}}
	I0531 19:08:05.208275   71907 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0531 19:08:05.208299   71907 kic_runner.go:114] Args: [docker exec --privileged multinode-025078 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0531 19:08:05.264797   71907 cli_runner.go:164] Run: docker container inspect multinode-025078 --format={{.State.Status}}
	I0531 19:08:05.282959   71907 machine.go:88] provisioning docker machine ...
	I0531 19:08:05.282990   71907 ubuntu.go:169] provisioning hostname "multinode-025078"
	I0531 19:08:05.283059   71907 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-025078
	I0531 19:08:05.300725   71907 main.go:141] libmachine: Using SSH client type: native
	I0531 19:08:05.301185   71907 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x39eaa0] 0x3a1430 <nil>  [] 0s} 127.0.0.1 32847 <nil> <nil>}
	I0531 19:08:05.301205   71907 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-025078 && echo "multinode-025078" | sudo tee /etc/hostname
	I0531 19:08:05.441898   71907 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-025078
	
	I0531 19:08:05.441973   71907 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-025078
	I0531 19:08:05.461842   71907 main.go:141] libmachine: Using SSH client type: native
	I0531 19:08:05.462276   71907 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x39eaa0] 0x3a1430 <nil>  [] 0s} 127.0.0.1 32847 <nil> <nil>}
	I0531 19:08:05.462297   71907 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-025078' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-025078/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-025078' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0531 19:08:05.587896   71907 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0531 19:08:05.587922   71907 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/16569-2389/.minikube CaCertPath:/home/jenkins/minikube-integration/16569-2389/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/16569-2389/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/16569-2389/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/16569-2389/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/16569-2389/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/16569-2389/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/16569-2389/.minikube}
	I0531 19:08:05.587946   71907 ubuntu.go:177] setting up certificates
	I0531 19:08:05.587954   71907 provision.go:83] configureAuth start
	I0531 19:08:05.588015   71907 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-025078
	I0531 19:08:05.606391   71907 provision.go:138] copyHostCerts
	I0531 19:08:05.606429   71907 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16569-2389/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/16569-2389/.minikube/ca.pem
	I0531 19:08:05.606458   71907 exec_runner.go:144] found /home/jenkins/minikube-integration/16569-2389/.minikube/ca.pem, removing ...
	I0531 19:08:05.606471   71907 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16569-2389/.minikube/ca.pem
	I0531 19:08:05.606547   71907 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16569-2389/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/16569-2389/.minikube/ca.pem (1078 bytes)
	I0531 19:08:05.606639   71907 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16569-2389/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/16569-2389/.minikube/cert.pem
	I0531 19:08:05.606662   71907 exec_runner.go:144] found /home/jenkins/minikube-integration/16569-2389/.minikube/cert.pem, removing ...
	I0531 19:08:05.606670   71907 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16569-2389/.minikube/cert.pem
	I0531 19:08:05.606699   71907 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16569-2389/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/16569-2389/.minikube/cert.pem (1123 bytes)
	I0531 19:08:05.606923   71907 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16569-2389/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/16569-2389/.minikube/key.pem
	I0531 19:08:05.606946   71907 exec_runner.go:144] found /home/jenkins/minikube-integration/16569-2389/.minikube/key.pem, removing ...
	I0531 19:08:05.606951   71907 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16569-2389/.minikube/key.pem
	I0531 19:08:05.606980   71907 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16569-2389/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/16569-2389/.minikube/key.pem (1679 bytes)
	I0531 19:08:05.607045   71907 provision.go:112] generating server cert: /home/jenkins/minikube-integration/16569-2389/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/16569-2389/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/16569-2389/.minikube/certs/ca-key.pem org=jenkins.multinode-025078 san=[192.168.58.2 127.0.0.1 localhost 127.0.0.1 minikube multinode-025078]
	I0531 19:08:06.354759   71907 provision.go:172] copyRemoteCerts
	I0531 19:08:06.354823   71907 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0531 19:08:06.354873   71907 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-025078
	I0531 19:08:06.372664   71907 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32847 SSHKeyPath:/home/jenkins/minikube-integration/16569-2389/.minikube/machines/multinode-025078/id_rsa Username:docker}
	I0531 19:08:06.465335   71907 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16569-2389/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0531 19:08:06.465392   71907 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16569-2389/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0531 19:08:06.493864   71907 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16569-2389/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0531 19:08:06.493981   71907 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16569-2389/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0531 19:08:06.523553   71907 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16569-2389/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0531 19:08:06.523637   71907 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16569-2389/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0531 19:08:06.552820   71907 provision.go:86] duration metric: configureAuth took 964.850514ms
	I0531 19:08:06.552887   71907 ubuntu.go:193] setting minikube options for container-runtime
	I0531 19:08:06.553081   71907 config.go:182] Loaded profile config "multinode-025078": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.27.2
	I0531 19:08:06.553190   71907 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-025078
	I0531 19:08:06.572068   71907 main.go:141] libmachine: Using SSH client type: native
	I0531 19:08:06.572524   71907 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x39eaa0] 0x3a1430 <nil>  [] 0s} 127.0.0.1 32847 <nil> <nil>}
	I0531 19:08:06.572549   71907 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0531 19:08:06.813535   71907 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0531 19:08:06.813558   71907 machine.go:91] provisioned docker machine in 1.530579048s
	I0531 19:08:06.813568   71907 client.go:171] LocalClient.Create took 8.472582621s
	I0531 19:08:06.813580   71907 start.go:167] duration metric: libmachine.API.Create for "multinode-025078" took 8.472622653s
	I0531 19:08:06.813587   71907 start.go:300] post-start starting for "multinode-025078" (driver="docker")
	I0531 19:08:06.813593   71907 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0531 19:08:06.813661   71907 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0531 19:08:06.813708   71907 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-025078
	I0531 19:08:06.832713   71907 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32847 SSHKeyPath:/home/jenkins/minikube-integration/16569-2389/.minikube/machines/multinode-025078/id_rsa Username:docker}
	I0531 19:08:06.925632   71907 ssh_runner.go:195] Run: cat /etc/os-release
	I0531 19:08:06.929679   71907 command_runner.go:130] > PRETTY_NAME="Ubuntu 22.04.2 LTS"
	I0531 19:08:06.929700   71907 command_runner.go:130] > NAME="Ubuntu"
	I0531 19:08:06.929708   71907 command_runner.go:130] > VERSION_ID="22.04"
	I0531 19:08:06.929714   71907 command_runner.go:130] > VERSION="22.04.2 LTS (Jammy Jellyfish)"
	I0531 19:08:06.929727   71907 command_runner.go:130] > VERSION_CODENAME=jammy
	I0531 19:08:06.929731   71907 command_runner.go:130] > ID=ubuntu
	I0531 19:08:06.929736   71907 command_runner.go:130] > ID_LIKE=debian
	I0531 19:08:06.929742   71907 command_runner.go:130] > HOME_URL="https://www.ubuntu.com/"
	I0531 19:08:06.929748   71907 command_runner.go:130] > SUPPORT_URL="https://help.ubuntu.com/"
	I0531 19:08:06.929755   71907 command_runner.go:130] > BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
	I0531 19:08:06.929764   71907 command_runner.go:130] > PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
	I0531 19:08:06.929773   71907 command_runner.go:130] > UBUNTU_CODENAME=jammy
	I0531 19:08:06.929820   71907 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0531 19:08:06.929851   71907 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0531 19:08:06.929865   71907 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0531 19:08:06.929874   71907 info.go:137] Remote host: Ubuntu 22.04.2 LTS
	I0531 19:08:06.929890   71907 filesync.go:126] Scanning /home/jenkins/minikube-integration/16569-2389/.minikube/addons for local assets ...
	I0531 19:08:06.929958   71907 filesync.go:126] Scanning /home/jenkins/minikube-integration/16569-2389/.minikube/files for local assets ...
	I0531 19:08:06.930040   71907 filesync.go:149] local asset: /home/jenkins/minikube-integration/16569-2389/.minikube/files/etc/ssl/certs/78042.pem -> 78042.pem in /etc/ssl/certs
	I0531 19:08:06.930051   71907 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16569-2389/.minikube/files/etc/ssl/certs/78042.pem -> /etc/ssl/certs/78042.pem
	I0531 19:08:06.930152   71907 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0531 19:08:06.940564   71907 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16569-2389/.minikube/files/etc/ssl/certs/78042.pem --> /etc/ssl/certs/78042.pem (1708 bytes)
	I0531 19:08:06.969336   71907 start.go:303] post-start completed in 155.735605ms
	I0531 19:08:06.969716   71907 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-025078
	I0531 19:08:06.987910   71907 profile.go:148] Saving config to /home/jenkins/minikube-integration/16569-2389/.minikube/profiles/multinode-025078/config.json ...
	I0531 19:08:06.988196   71907 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0531 19:08:06.988245   71907 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-025078
	I0531 19:08:07.009351   71907 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32847 SSHKeyPath:/home/jenkins/minikube-integration/16569-2389/.minikube/machines/multinode-025078/id_rsa Username:docker}
	I0531 19:08:07.101028   71907 command_runner.go:130] > 10%!(MISSING)
	I0531 19:08:07.101103   71907 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0531 19:08:07.106663   71907 command_runner.go:130] > 176G
	I0531 19:08:07.107010   71907 start.go:128] duration metric: createHost completed in 8.76821606s
	I0531 19:08:07.107030   71907 start.go:83] releasing machines lock for "multinode-025078", held for 8.768372293s
	I0531 19:08:07.107108   71907 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-025078
	I0531 19:08:07.124771   71907 ssh_runner.go:195] Run: cat /version.json
	I0531 19:08:07.124782   71907 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0531 19:08:07.124824   71907 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-025078
	I0531 19:08:07.124840   71907 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-025078
	I0531 19:08:07.144421   71907 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32847 SSHKeyPath:/home/jenkins/minikube-integration/16569-2389/.minikube/machines/multinode-025078/id_rsa Username:docker}
	I0531 19:08:07.158068   71907 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32847 SSHKeyPath:/home/jenkins/minikube-integration/16569-2389/.minikube/machines/multinode-025078/id_rsa Username:docker}
	I0531 19:08:07.381562   71907 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0531 19:08:07.381633   71907 command_runner.go:130] > {"iso_version": "v1.30.1-1684885329-16572", "kicbase_version": "v0.0.39-1685034446-16582", "minikube_version": "v1.30.1", "commit": "9bed7441264a4ae8022c57b970940d4a22d9373a"}
	I0531 19:08:07.381767   71907 ssh_runner.go:195] Run: systemctl --version
	I0531 19:08:07.386989   71907 command_runner.go:130] > systemd 249 (249.11-0ubuntu3.9)
	I0531 19:08:07.387024   71907 command_runner.go:130] > +PAM +AUDIT +SELINUX +APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified
	I0531 19:08:07.387406   71907 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0531 19:08:07.534919   71907 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0531 19:08:07.540272   71907 command_runner.go:130] >   File: /etc/cni/net.d/200-loopback.conf
	I0531 19:08:07.540339   71907 command_runner.go:130] >   Size: 54        	Blocks: 8          IO Block: 4096   regular file
	I0531 19:08:07.540356   71907 command_runner.go:130] > Device: 36h/54d	Inode: 1302367     Links: 1
	I0531 19:08:07.540365   71907 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0531 19:08:07.540372   71907 command_runner.go:130] > Access: 2023-04-04 14:31:21.000000000 +0000
	I0531 19:08:07.540379   71907 command_runner.go:130] > Modify: 2023-04-04 14:31:21.000000000 +0000
	I0531 19:08:07.540389   71907 command_runner.go:130] > Change: 2023-05-31 18:44:35.221132349 +0000
	I0531 19:08:07.540395   71907 command_runner.go:130] >  Birth: 2023-05-31 18:44:35.221132349 +0000
	I0531 19:08:07.540715   71907 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0531 19:08:07.565348   71907 cni.go:220] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0531 19:08:07.565422   71907 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0531 19:08:07.601312   71907 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf, 
	I0531 19:08:07.601339   71907 cni.go:261] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0531 19:08:07.601347   71907 start.go:481] detecting cgroup driver to use...
	I0531 19:08:07.601375   71907 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0531 19:08:07.601438   71907 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0531 19:08:07.619294   71907 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0531 19:08:07.632800   71907 docker.go:193] disabling cri-docker service (if available) ...
	I0531 19:08:07.632879   71907 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0531 19:08:07.649228   71907 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0531 19:08:07.665927   71907 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0531 19:08:07.764462   71907 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0531 19:08:07.871319   71907 command_runner.go:130] ! Created symlink /etc/systemd/system/cri-docker.service → /dev/null.
	I0531 19:08:07.871357   71907 docker.go:209] disabling docker service ...
	I0531 19:08:07.871420   71907 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0531 19:08:07.893171   71907 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0531 19:08:07.907356   71907 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0531 19:08:08.006313   71907 command_runner.go:130] ! Removed /etc/systemd/system/sockets.target.wants/docker.socket.
	I0531 19:08:08.006431   71907 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0531 19:08:08.020401   71907 command_runner.go:130] ! Created symlink /etc/systemd/system/docker.service → /dev/null.
	I0531 19:08:08.114533   71907 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0531 19:08:08.128824   71907 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0531 19:08:08.148159   71907 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I0531 19:08:08.149616   71907 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0531 19:08:08.149678   71907 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0531 19:08:08.162078   71907 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0531 19:08:08.162146   71907 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0531 19:08:08.174411   71907 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0531 19:08:08.187673   71907 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
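
The sed calls above rewrite CRI-O's drop-in at /etc/crio/crio.conf.d/02-crio.conf: pause_image is pinned to registry.k8s.io/pause:3.9, cgroup_manager is set to "cgroupfs" to match the driver detected on the host, and conmon_cgroup is re-created as "pod", the value CRI-O expects when the manager is cgroupfs. A Go sketch of the same rewrite using regexp instead of sed; note the log deletes and re-inserts the conmon_cgroup line, while this sketch simply rewrites or appends it in place:

    package main

    import (
    	"os"
    	"regexp"
    )

    // setKey replaces any existing `key = ...` line (commented or not) with
    // `key = "value"`, mirroring the sed -i 's|^.*key = .*$|...|' calls in
    // the log; if no such line exists, the setting is appended.
    func setKey(conf []byte, key, value string) []byte {
    	line := []byte(key + ` = "` + value + `"`)
    	re := regexp.MustCompile(`(?m)^.*` + key + ` = .*$`)
    	if re.Match(conf) {
    		return re.ReplaceAll(conf, line)
    	}
    	return append(append(conf, line...), '\n')
    }

    func main() {
    	const path = "/etc/crio/crio.conf.d/02-crio.conf"
    	conf, err := os.ReadFile(path)
    	if err != nil {
    		panic(err)
    	}
    	conf = setKey(conf, "pause_image", "registry.k8s.io/pause:3.9")
    	conf = setKey(conf, "cgroup_manager", "cgroupfs")
    	conf = setKey(conf, "conmon_cgroup", "pod")
    	if err := os.WriteFile(path, conf, 0644); err != nil {
    		panic(err)
    	}
    }
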
	I0531 19:08:08.200734   71907 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0531 19:08:08.212393   71907 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0531 19:08:08.221491   71907 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0531 19:08:08.222675   71907 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
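
The two sysctl steps verify that net.bridge.bridge-nf-call-iptables is already 1 and force IPv4 forwarding on, both standard prerequisites for Kubernetes pod networking. Under Linux these are just files beneath /proc/sys, so a sketch can read and write them directly (the helper name is hypothetical; minikube shells out to sysctl and echo instead):

    package main

    import (
    	"fmt"
    	"os"
    	"strings"
    )

    // readSysctl returns the current value of a /proc/sys entry, e.g.
    // "net/bridge/bridge-nf-call-iptables".
    func readSysctl(key string) (string, error) {
    	b, err := os.ReadFile("/proc/sys/" + key)
    	return strings.TrimSpace(string(b)), err
    }

    func main() {
    	v, err := readSysctl("net/bridge/bridge-nf-call-iptables")
    	if err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    	fmt.Println("bridge-nf-call-iptables =", v)

    	// Equivalent of: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
    	if err := os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1\n"), 0644); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    }
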
	I0531 19:08:08.233129   71907 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0531 19:08:08.331652   71907 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0531 19:08:08.452787   71907 start.go:528] Will wait 60s for socket path /var/run/crio/crio.sock
	I0531 19:08:08.452893   71907 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0531 19:08:08.457760   71907 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0531 19:08:08.457833   71907 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0531 19:08:08.457854   71907 command_runner.go:130] > Device: 43h/67d	Inode: 186         Links: 1
	I0531 19:08:08.457875   71907 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0531 19:08:08.457907   71907 command_runner.go:130] > Access: 2023-05-31 19:08:08.435721489 +0000
	I0531 19:08:08.457938   71907 command_runner.go:130] > Modify: 2023-05-31 19:08:08.435721489 +0000
	I0531 19:08:08.457962   71907 command_runner.go:130] > Change: 2023-05-31 19:08:08.435721489 +0000
	I0531 19:08:08.457991   71907 command_runner.go:130] >  Birth: -
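
After systemctl restart crio, start.go waits up to 60s for /var/run/crio/crio.sock to appear before trusting the runtime; the stat output above confirms it is a socket owned by root with mode 0660. A sketch of that bounded poll (the 500ms interval is an assumption; the real code stats the path over SSH):

    package main

    import (
    	"fmt"
    	"os"
    	"time"
    )

    // waitForSocket polls until path exists and is a unix socket, or the
    // timeout elapses — the same contract as "Will wait 60s for socket path".
    func waitForSocket(path string, timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		if fi, err := os.Stat(path); err == nil && fi.Mode()&os.ModeSocket != 0 {
    			return nil
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	return fmt.Errorf("timed out waiting for %s", path)
    }

    func main() {
    	if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    	fmt.Println("crio socket is up")
    }
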
	I0531 19:08:08.458347   71907 start.go:549] Will wait 60s for crictl version
	I0531 19:08:08.458406   71907 ssh_runner.go:195] Run: which crictl
	I0531 19:08:08.463108   71907 command_runner.go:130] > /usr/bin/crictl
	I0531 19:08:08.463239   71907 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0531 19:08:08.501777   71907 command_runner.go:130] > Version:  0.1.0
	I0531 19:08:08.501841   71907 command_runner.go:130] > RuntimeName:  cri-o
	I0531 19:08:08.501871   71907 command_runner.go:130] > RuntimeVersion:  1.24.5
	I0531 19:08:08.501892   71907 command_runner.go:130] > RuntimeApiVersion:  v1
	I0531 19:08:08.504798   71907 start.go:565] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.5
	RuntimeApiVersion:  v1
	I0531 19:08:08.504880   71907 ssh_runner.go:195] Run: crio --version
	I0531 19:08:08.549054   71907 command_runner.go:130] > crio version 1.24.5
	I0531 19:08:08.549073   71907 command_runner.go:130] > Version:          1.24.5
	I0531 19:08:08.549084   71907 command_runner.go:130] > GitCommit:        b007cb6753d97de6218787b6894b0e3cc1dc8ecd
	I0531 19:08:08.549090   71907 command_runner.go:130] > GitTreeState:     clean
	I0531 19:08:08.549105   71907 command_runner.go:130] > BuildDate:        2023-04-04T14:31:22Z
	I0531 19:08:08.549119   71907 command_runner.go:130] > GoVersion:        go1.18.2
	I0531 19:08:08.549127   71907 command_runner.go:130] > Compiler:         gc
	I0531 19:08:08.549135   71907 command_runner.go:130] > Platform:         linux/arm64
	I0531 19:08:08.549142   71907 command_runner.go:130] > Linkmode:         dynamic
	I0531 19:08:08.549155   71907 command_runner.go:130] > BuildTags:        apparmor, exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I0531 19:08:08.549161   71907 command_runner.go:130] > SeccompEnabled:   true
	I0531 19:08:08.549176   71907 command_runner.go:130] > AppArmorEnabled:  false
	I0531 19:08:08.551234   71907 ssh_runner.go:195] Run: crio --version
	I0531 19:08:08.594265   71907 command_runner.go:130] > crio version 1.24.5
	I0531 19:08:08.594285   71907 command_runner.go:130] > Version:          1.24.5
	I0531 19:08:08.594294   71907 command_runner.go:130] > GitCommit:        b007cb6753d97de6218787b6894b0e3cc1dc8ecd
	I0531 19:08:08.594299   71907 command_runner.go:130] > GitTreeState:     clean
	I0531 19:08:08.594306   71907 command_runner.go:130] > BuildDate:        2023-04-04T14:31:22Z
	I0531 19:08:08.594312   71907 command_runner.go:130] > GoVersion:        go1.18.2
	I0531 19:08:08.594322   71907 command_runner.go:130] > Compiler:         gc
	I0531 19:08:08.594328   71907 command_runner.go:130] > Platform:         linux/arm64
	I0531 19:08:08.594336   71907 command_runner.go:130] > Linkmode:         dynamic
	I0531 19:08:08.594345   71907 command_runner.go:130] > BuildTags:        apparmor, exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I0531 19:08:08.594350   71907 command_runner.go:130] > SeccompEnabled:   true
	I0531 19:08:08.594358   71907 command_runner.go:130] > AppArmorEnabled:  false
	I0531 19:08:08.599454   71907 out.go:177] * Preparing Kubernetes v1.27.2 on CRI-O 1.24.5 ...
	I0531 19:08:08.601153   71907 cli_runner.go:164] Run: docker network inspect multinode-025078 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0531 19:08:08.619068   71907 ssh_runner.go:195] Run: grep 192.168.58.1	host.minikube.internal$ /etc/hosts
	I0531 19:08:08.623690   71907 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.58.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
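
The grep/bash pipeline above makes the guest's /etc/hosts contain exactly one host.minikube.internal entry pointing at the network gateway 192.168.58.1, so workloads inside the node can reach the host. A Go sketch of the same idempotent rewrite (filter out any old entry, append the fresh one, write back; done locally here rather than over SSH):

    package main

    import (
    	"os"
    	"strings"
    )

    // ensureHostEntry drops any existing line for host and appends a fresh
    // "ip\thost" entry — the Go equivalent of the grep -v / echo pipeline.
    func ensureHostEntry(path, ip, host string) error {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return err
    	}
    	var kept []string
    	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
    		if !strings.HasSuffix(line, "\t"+host) {
    			kept = append(kept, line)
    		}
    	}
    	kept = append(kept, ip+"\t"+host)
    	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
    }

    func main() {
    	if err := ensureHostEntry("/etc/hosts", "192.168.58.1", "host.minikube.internal"); err != nil {
    		panic(err)
    	}
    }
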
	I0531 19:08:08.637123   71907 preload.go:132] Checking if preload exists for k8s version v1.27.2 and runtime crio
	I0531 19:08:08.637192   71907 ssh_runner.go:195] Run: sudo crictl images --output json
	I0531 19:08:08.708078   71907 command_runner.go:130] > {
	I0531 19:08:08.708096   71907 command_runner.go:130] >   "images": [
	I0531 19:08:08.708102   71907 command_runner.go:130] >     {
	I0531 19:08:08.708111   71907 command_runner.go:130] >       "id": "b18bf71b941bae2e12db1c07e567ad14e4febbc778310a0fc64487f1ac877d79",
	I0531 19:08:08.708117   71907 command_runner.go:130] >       "repoTags": [
	I0531 19:08:08.708126   71907 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20230511-dc714da8"
	I0531 19:08:08.708131   71907 command_runner.go:130] >       ],
	I0531 19:08:08.708136   71907 command_runner.go:130] >       "repoDigests": [
	I0531 19:08:08.708147   71907 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:2c39858b71cf6c5737ff0daa8130a6574d4c6bd2a7dacaf002060c02f2bc1b4f",
	I0531 19:08:08.708156   71907 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:6c00e28db008c2afa67d9ee085c86184ec9ae5281d5ae1bd15006746fb9a1974"
	I0531 19:08:08.708161   71907 command_runner.go:130] >       ],
	I0531 19:08:08.708167   71907 command_runner.go:130] >       "size": "60881430",
	I0531 19:08:08.708172   71907 command_runner.go:130] >       "uid": null,
	I0531 19:08:08.708177   71907 command_runner.go:130] >       "username": "",
	I0531 19:08:08.708183   71907 command_runner.go:130] >       "spec": null,
	I0531 19:08:08.708190   71907 command_runner.go:130] >       "pinned": false
	I0531 19:08:08.708194   71907 command_runner.go:130] >     },
	I0531 19:08:08.708199   71907 command_runner.go:130] >     {
	I0531 19:08:08.708206   71907 command_runner.go:130] >       "id": "ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6",
	I0531 19:08:08.708211   71907 command_runner.go:130] >       "repoTags": [
	I0531 19:08:08.708218   71907 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0531 19:08:08.708222   71907 command_runner.go:130] >       ],
	I0531 19:08:08.708228   71907 command_runner.go:130] >       "repoDigests": [
	I0531 19:08:08.708237   71907 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2",
	I0531 19:08:08.708247   71907 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"
	I0531 19:08:08.708252   71907 command_runner.go:130] >       ],
	I0531 19:08:08.708259   71907 command_runner.go:130] >       "size": "29037500",
	I0531 19:08:08.708265   71907 command_runner.go:130] >       "uid": null,
	I0531 19:08:08.708270   71907 command_runner.go:130] >       "username": "",
	I0531 19:08:08.708275   71907 command_runner.go:130] >       "spec": null,
	I0531 19:08:08.708280   71907 command_runner.go:130] >       "pinned": false
	I0531 19:08:08.708284   71907 command_runner.go:130] >     },
	I0531 19:08:08.708289   71907 command_runner.go:130] >     {
	I0531 19:08:08.708296   71907 command_runner.go:130] >       "id": "97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108",
	I0531 19:08:08.708301   71907 command_runner.go:130] >       "repoTags": [
	I0531 19:08:08.708307   71907 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.10.1"
	I0531 19:08:08.708312   71907 command_runner.go:130] >       ],
	I0531 19:08:08.708317   71907 command_runner.go:130] >       "repoDigests": [
	I0531 19:08:08.708326   71907 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:74130b944396a0b0ca9af923ee6e03b08a35d98fc1bbaef4e35cf9acc5599105",
	I0531 19:08:08.708336   71907 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e"
	I0531 19:08:08.708340   71907 command_runner.go:130] >       ],
	I0531 19:08:08.708345   71907 command_runner.go:130] >       "size": "51393451",
	I0531 19:08:08.708350   71907 command_runner.go:130] >       "uid": null,
	I0531 19:08:08.708355   71907 command_runner.go:130] >       "username": "",
	I0531 19:08:08.708360   71907 command_runner.go:130] >       "spec": null,
	I0531 19:08:08.708369   71907 command_runner.go:130] >       "pinned": false
	I0531 19:08:08.708374   71907 command_runner.go:130] >     },
	I0531 19:08:08.708378   71907 command_runner.go:130] >     {
	I0531 19:08:08.708386   71907 command_runner.go:130] >       "id": "24bc64e911039ecf00e263be2161797c758b7d82403ca5516ab64047a477f737",
	I0531 19:08:08.708391   71907 command_runner.go:130] >       "repoTags": [
	I0531 19:08:08.708397   71907 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.7-0"
	I0531 19:08:08.708401   71907 command_runner.go:130] >       ],
	I0531 19:08:08.708406   71907 command_runner.go:130] >       "repoDigests": [
	I0531 19:08:08.708415   71907 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:1c19137e8a1716ce9f66c8c767bf114d7cad975db7a9784146486aa764f6dddd",
	I0531 19:08:08.708424   71907 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83"
	I0531 19:08:08.708431   71907 command_runner.go:130] >       ],
	I0531 19:08:08.708436   71907 command_runner.go:130] >       "size": "182283991",
	I0531 19:08:08.708440   71907 command_runner.go:130] >       "uid": {
	I0531 19:08:08.708445   71907 command_runner.go:130] >         "value": "0"
	I0531 19:08:08.708450   71907 command_runner.go:130] >       },
	I0531 19:08:08.708455   71907 command_runner.go:130] >       "username": "",
	I0531 19:08:08.708459   71907 command_runner.go:130] >       "spec": null,
	I0531 19:08:08.708464   71907 command_runner.go:130] >       "pinned": false
	I0531 19:08:08.708469   71907 command_runner.go:130] >     },
	I0531 19:08:08.708473   71907 command_runner.go:130] >     {
	I0531 19:08:08.708481   71907 command_runner.go:130] >       "id": "72c9df6be7f1b997e4a31b5cb9aa7262e5278905af97e6a69e341e3f0f9bbaae",
	I0531 19:08:08.708486   71907 command_runner.go:130] >       "repoTags": [
	I0531 19:08:08.708492   71907 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.27.2"
	I0531 19:08:08.708497   71907 command_runner.go:130] >       ],
	I0531 19:08:08.708501   71907 command_runner.go:130] >       "repoDigests": [
	I0531 19:08:08.708511   71907 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:599c991fe774036dff5f54b3113290d83da173d7627ea259bd2a3064eaa7987e",
	I0531 19:08:08.708520   71907 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:94e48585629fde3c1d06c6ae8f62885d3052f12a1072ffd97611296525eff5b9"
	I0531 19:08:08.708525   71907 command_runner.go:130] >       ],
	I0531 19:08:08.708530   71907 command_runner.go:130] >       "size": "116138960",
	I0531 19:08:08.708535   71907 command_runner.go:130] >       "uid": {
	I0531 19:08:08.708540   71907 command_runner.go:130] >         "value": "0"
	I0531 19:08:08.708544   71907 command_runner.go:130] >       },
	I0531 19:08:08.708549   71907 command_runner.go:130] >       "username": "",
	I0531 19:08:08.708554   71907 command_runner.go:130] >       "spec": null,
	I0531 19:08:08.708558   71907 command_runner.go:130] >       "pinned": false
	I0531 19:08:08.708563   71907 command_runner.go:130] >     },
	I0531 19:08:08.708567   71907 command_runner.go:130] >     {
	I0531 19:08:08.708575   71907 command_runner.go:130] >       "id": "2ee705380c3c59a538b853cbe9ae9886ebbd0001a4cea4add5adeea48e5f48d4",
	I0531 19:08:08.708580   71907 command_runner.go:130] >       "repoTags": [
	I0531 19:08:08.708586   71907 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.27.2"
	I0531 19:08:08.708590   71907 command_runner.go:130] >       ],
	I0531 19:08:08.708595   71907 command_runner.go:130] >       "repoDigests": [
	I0531 19:08:08.708605   71907 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:6626c27b7df41d86340a701121792c5c0dc40ca8877c23478fc5659103bc7505",
	I0531 19:08:08.708614   71907 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:b0990ef7c9ce9edd0f57355a7e4cb43a71e864bfd2cd55bc68e4998e00213b56"
	I0531 19:08:08.708619   71907 command_runner.go:130] >       ],
	I0531 19:08:08.708625   71907 command_runner.go:130] >       "size": "108667702",
	I0531 19:08:08.708629   71907 command_runner.go:130] >       "uid": {
	I0531 19:08:08.708635   71907 command_runner.go:130] >         "value": "0"
	I0531 19:08:08.708639   71907 command_runner.go:130] >       },
	I0531 19:08:08.708644   71907 command_runner.go:130] >       "username": "",
	I0531 19:08:08.708649   71907 command_runner.go:130] >       "spec": null,
	I0531 19:08:08.708654   71907 command_runner.go:130] >       "pinned": false
	I0531 19:08:08.708658   71907 command_runner.go:130] >     },
	I0531 19:08:08.708663   71907 command_runner.go:130] >     {
	I0531 19:08:08.708671   71907 command_runner.go:130] >       "id": "29921a084542255eb81a1a660a603b1a24636d88b202f9010daae75fa32754c0",
	I0531 19:08:08.708675   71907 command_runner.go:130] >       "repoTags": [
	I0531 19:08:08.708681   71907 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.27.2"
	I0531 19:08:08.708685   71907 command_runner.go:130] >       ],
	I0531 19:08:08.708690   71907 command_runner.go:130] >       "repoDigests": [
	I0531 19:08:08.708700   71907 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:1e4f13f5f5c215813fb9c9c6f56da1c0354363f2a69bd12732658f79d585864f",
	I0531 19:08:08.708709   71907 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:7ebc3b4df29c385197555a543c4a3379cfcdabdfbe37e2b2ea3ceac87ce28bca"
	I0531 19:08:08.708713   71907 command_runner.go:130] >       ],
	I0531 19:08:08.708719   71907 command_runner.go:130] >       "size": "68099991",
	I0531 19:08:08.708724   71907 command_runner.go:130] >       "uid": null,
	I0531 19:08:08.708729   71907 command_runner.go:130] >       "username": "",
	I0531 19:08:08.708733   71907 command_runner.go:130] >       "spec": null,
	I0531 19:08:08.708738   71907 command_runner.go:130] >       "pinned": false
	I0531 19:08:08.708743   71907 command_runner.go:130] >     },
	I0531 19:08:08.708747   71907 command_runner.go:130] >     {
	I0531 19:08:08.708755   71907 command_runner.go:130] >       "id": "305d7ed1dae2877c3a80d434c5fb9f1aac1aa3d2431c36130a3fcd1970e93840",
	I0531 19:08:08.708760   71907 command_runner.go:130] >       "repoTags": [
	I0531 19:08:08.708766   71907 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.27.2"
	I0531 19:08:08.708770   71907 command_runner.go:130] >       ],
	I0531 19:08:08.708775   71907 command_runner.go:130] >       "repoDigests": [
	I0531 19:08:08.708790   71907 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:89e8c591cc58bd0e8651dddee3de290399b1ae5ad14779afe84779083fe05177",
	I0531 19:08:08.708799   71907 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:e0ecd0ce2447789a58ad5e94acda2cff8ad4e6ca3ccc06041b89e7eb0b78a6c4"
	I0531 19:08:08.708804   71907 command_runner.go:130] >       ],
	I0531 19:08:08.708809   71907 command_runner.go:130] >       "size": "57615158",
	I0531 19:08:08.708814   71907 command_runner.go:130] >       "uid": {
	I0531 19:08:08.708818   71907 command_runner.go:130] >         "value": "0"
	I0531 19:08:08.708823   71907 command_runner.go:130] >       },
	I0531 19:08:08.708827   71907 command_runner.go:130] >       "username": "",
	I0531 19:08:08.708832   71907 command_runner.go:130] >       "spec": null,
	I0531 19:08:08.708837   71907 command_runner.go:130] >       "pinned": false
	I0531 19:08:08.708842   71907 command_runner.go:130] >     },
	I0531 19:08:08.708846   71907 command_runner.go:130] >     {
	I0531 19:08:08.708854   71907 command_runner.go:130] >       "id": "829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e",
	I0531 19:08:08.708858   71907 command_runner.go:130] >       "repoTags": [
	I0531 19:08:08.708864   71907 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0531 19:08:08.708869   71907 command_runner.go:130] >       ],
	I0531 19:08:08.708874   71907 command_runner.go:130] >       "repoDigests": [
	I0531 19:08:08.708884   71907 command_runner.go:130] >         "registry.k8s.io/pause@sha256:3ec98b8452dc8ae265a6917dfb81587ac78849e520d5dbba6de524851d20eca6",
	I0531 19:08:08.708893   71907 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097"
	I0531 19:08:08.708898   71907 command_runner.go:130] >       ],
	I0531 19:08:08.708903   71907 command_runner.go:130] >       "size": "520014",
	I0531 19:08:08.708908   71907 command_runner.go:130] >       "uid": {
	I0531 19:08:08.708913   71907 command_runner.go:130] >         "value": "65535"
	I0531 19:08:08.708917   71907 command_runner.go:130] >       },
	I0531 19:08:08.708922   71907 command_runner.go:130] >       "username": "",
	I0531 19:08:08.708926   71907 command_runner.go:130] >       "spec": null,
	I0531 19:08:08.708932   71907 command_runner.go:130] >       "pinned": false
	I0531 19:08:08.708936   71907 command_runner.go:130] >     }
	I0531 19:08:08.708940   71907 command_runner.go:130] >   ]
	I0531 19:08:08.708943   71907 command_runner.go:130] > }
	I0531 19:08:08.709134   71907 crio.go:496] all images are preloaded for cri-o runtime.
	I0531 19:08:08.709143   71907 crio.go:415] Images already preloaded, skipping extraction
	I0531 19:08:08.709197   71907 ssh_runner.go:195] Run: sudo crictl images --output json
	[second "sudo crictl images --output json" run elided: its JSON payload is byte-for-byte identical to the nine-image list printed above]
	I0531 19:08:08.756195   71907 crio.go:496] all images are preloaded for cri-o runtime.
	I0531 19:08:08.756218   71907 cache_images.go:84] Images are preloaded, skipping loading
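
The preload check works by listing the runtime's images as JSON (sudo crictl images --output json) and comparing repoTags against the image set expected for Kubernetes v1.27.2; here everything was present, so both extraction and loading are skipped. A sketch of that check, decoding the same JSON shape shown above (the struct mirrors the logged output; the expected list below is deliberately abbreviated):

    package main

    import (
    	"encoding/json"
    	"fmt"
    	"os/exec"
    )

    // imageList matches the JSON printed by `crictl images --output json`.
    type imageList struct {
    	Images []struct {
    		RepoTags []string `json:"repoTags"`
    	} `json:"images"`
    }

    func main() {
    	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
    	if err != nil {
    		panic(err)
    	}
    	var list imageList
    	if err := json.Unmarshal(out, &list); err != nil {
    		panic(err)
    	}
    	have := map[string]bool{}
    	for _, img := range list.Images {
    		for _, tag := range img.RepoTags {
    			have[tag] = true
    		}
    	}
    	// Abbreviated expectation; the real preload covers all control-plane images.
    	for _, want := range []string{
    		"registry.k8s.io/kube-apiserver:v1.27.2",
    		"registry.k8s.io/etcd:3.5.7-0",
    		"registry.k8s.io/pause:3.9",
    	} {
    		if !have[want] {
    			fmt.Println("missing:", want)
    		}
    	}
    }
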
	I0531 19:08:08.756290   71907 ssh_runner.go:195] Run: crio config
	I0531 19:08:08.809477   71907 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I0531 19:08:08.809511   71907 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0531 19:08:08.809520   71907 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0531 19:08:08.809524   71907 command_runner.go:130] > #
	I0531 19:08:08.809533   71907 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I0531 19:08:08.809541   71907 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I0531 19:08:08.809548   71907 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I0531 19:08:08.809571   71907 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I0531 19:08:08.809577   71907 command_runner.go:130] > # reload'.
	I0531 19:08:08.809584   71907 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0531 19:08:08.809605   71907 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0531 19:08:08.809613   71907 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I0531 19:08:08.809624   71907 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I0531 19:08:08.809631   71907 command_runner.go:130] > [crio]
	I0531 19:08:08.809646   71907 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I0531 19:08:08.809653   71907 command_runner.go:130] > # containers images, in this directory.
	I0531 19:08:08.809673   71907 command_runner.go:130] > # root = "/home/docker/.local/share/containers/storage"
	I0531 19:08:08.809681   71907 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0531 19:08:08.809695   71907 command_runner.go:130] > # runroot = "/tmp/containers-user-1000/containers"
	I0531 19:08:08.809703   71907 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I0531 19:08:08.809711   71907 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0531 19:08:08.809716   71907 command_runner.go:130] > # storage_driver = "vfs"
	I0531 19:08:08.809725   71907 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I0531 19:08:08.809733   71907 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I0531 19:08:08.809738   71907 command_runner.go:130] > # storage_option = [
	I0531 19:08:08.809742   71907 command_runner.go:130] > # ]
	I0531 19:08:08.809750   71907 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I0531 19:08:08.809760   71907 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I0531 19:08:08.809766   71907 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I0531 19:08:08.809773   71907 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I0531 19:08:08.809780   71907 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I0531 19:08:08.809788   71907 command_runner.go:130] > # always happen on a node reboot
	I0531 19:08:08.809795   71907 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I0531 19:08:08.809808   71907 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I0531 19:08:08.809816   71907 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I0531 19:08:08.809831   71907 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I0531 19:08:08.809838   71907 command_runner.go:130] > # version_file_persist = "/var/lib/crio/version"
	I0531 19:08:08.809863   71907 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I0531 19:08:08.809873   71907 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I0531 19:08:08.809878   71907 command_runner.go:130] > # internal_wipe = true
	I0531 19:08:08.809885   71907 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I0531 19:08:08.809892   71907 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I0531 19:08:08.809899   71907 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I0531 19:08:08.810356   71907 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I0531 19:08:08.810388   71907 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0531 19:08:08.810394   71907 command_runner.go:130] > [crio.api]
	I0531 19:08:08.810401   71907 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0531 19:08:08.810412   71907 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I0531 19:08:08.810418   71907 command_runner.go:130] > # IP address on which the stream server will listen.
	I0531 19:08:08.810424   71907 command_runner.go:130] > # stream_address = "127.0.0.1"
	I0531 19:08:08.810434   71907 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I0531 19:08:08.810441   71907 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I0531 19:08:08.810446   71907 command_runner.go:130] > # stream_port = "0"
	I0531 19:08:08.810455   71907 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I0531 19:08:08.810461   71907 command_runner.go:130] > # stream_enable_tls = false
	I0531 19:08:08.810468   71907 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I0531 19:08:08.810473   71907 command_runner.go:130] > # stream_idle_timeout = ""
	I0531 19:08:08.810480   71907 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0531 19:08:08.810487   71907 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0531 19:08:08.810492   71907 command_runner.go:130] > # minutes.
	I0531 19:08:08.810497   71907 command_runner.go:130] > # stream_tls_cert = ""
	I0531 19:08:08.810506   71907 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I0531 19:08:08.810513   71907 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0531 19:08:08.810518   71907 command_runner.go:130] > # stream_tls_key = ""
	I0531 19:08:08.810525   71907 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0531 19:08:08.810533   71907 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I0531 19:08:08.810544   71907 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I0531 19:08:08.810549   71907 command_runner.go:130] > # stream_tls_ca = ""
	I0531 19:08:08.810565   71907 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 16 * 1024 * 1024.
	I0531 19:08:08.810571   71907 command_runner.go:130] > # grpc_max_send_msg_size = 83886080
	I0531 19:08:08.810580   71907 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 16 * 1024 * 1024.
	I0531 19:08:08.810585   71907 command_runner.go:130] > # grpc_max_recv_msg_size = 83886080
	I0531 19:08:08.810601   71907 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0531 19:08:08.810608   71907 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I0531 19:08:08.810615   71907 command_runner.go:130] > [crio.runtime]
	I0531 19:08:08.810625   71907 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I0531 19:08:08.810635   71907 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0531 19:08:08.810642   71907 command_runner.go:130] > # "nofile=1024:2048"
	I0531 19:08:08.810650   71907 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0531 19:08:08.810657   71907 command_runner.go:130] > # default_ulimits = [
	I0531 19:08:08.810661   71907 command_runner.go:130] > # ]
	I0531 19:08:08.810668   71907 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0531 19:08:08.810680   71907 command_runner.go:130] > # no_pivot = false
	I0531 19:08:08.810688   71907 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I0531 19:08:08.810698   71907 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I0531 19:08:08.810712   71907 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I0531 19:08:08.810720   71907 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0531 19:08:08.810748   71907 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I0531 19:08:08.810761   71907 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0531 19:08:08.810766   71907 command_runner.go:130] > # conmon = ""
	I0531 19:08:08.810774   71907 command_runner.go:130] > # Cgroup setting for conmon
	I0531 19:08:08.810785   71907 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I0531 19:08:08.810792   71907 command_runner.go:130] > conmon_cgroup = "pod"
	I0531 19:08:08.810799   71907 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I0531 19:08:08.810806   71907 command_runner.go:130] > # environment variables to conmon or the runtime.
	I0531 19:08:08.810814   71907 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0531 19:08:08.810819   71907 command_runner.go:130] > # conmon_env = [
	I0531 19:08:08.810824   71907 command_runner.go:130] > # ]
	I0531 19:08:08.810830   71907 command_runner.go:130] > # Additional environment variables to set for all the
	I0531 19:08:08.810836   71907 command_runner.go:130] > # containers. These are overridden if set in the
	I0531 19:08:08.810848   71907 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I0531 19:08:08.810853   71907 command_runner.go:130] > # default_env = [
	I0531 19:08:08.810857   71907 command_runner.go:130] > # ]
	I0531 19:08:08.810863   71907 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I0531 19:08:08.810875   71907 command_runner.go:130] > # selinux = false
	I0531 19:08:08.810886   71907 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0531 19:08:08.810902   71907 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I0531 19:08:08.810914   71907 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I0531 19:08:08.810919   71907 command_runner.go:130] > # seccomp_profile = ""
	I0531 19:08:08.810926   71907 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I0531 19:08:08.810933   71907 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I0531 19:08:08.810949   71907 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0531 19:08:08.810954   71907 command_runner.go:130] > # which might increase security.
	I0531 19:08:08.810965   71907 command_runner.go:130] > # seccomp_use_default_when_empty = true
	I0531 19:08:08.810980   71907 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0531 19:08:08.810988   71907 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I0531 19:08:08.810999   71907 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0531 19:08:08.811009   71907 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0531 19:08:08.811016   71907 command_runner.go:130] > # This option supports live configuration reload.
	I0531 19:08:08.811022   71907 command_runner.go:130] > # apparmor_profile = "crio-default"
	I0531 19:08:08.811032   71907 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I0531 19:08:08.811038   71907 command_runner.go:130] > # the cgroup blockio controller.
	I0531 19:08:08.811069   71907 command_runner.go:130] > # blockio_config_file = ""
	I0531 19:08:08.811083   71907 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I0531 19:08:08.811096   71907 command_runner.go:130] > # irqbalance daemon.
	I0531 19:08:08.811103   71907 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0531 19:08:08.811114   71907 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I0531 19:08:08.811124   71907 command_runner.go:130] > # This option supports live configuration reload.
	I0531 19:08:08.811134   71907 command_runner.go:130] > # rdt_config_file = ""
	I0531 19:08:08.811141   71907 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I0531 19:08:08.811147   71907 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I0531 19:08:08.811157   71907 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I0531 19:08:08.811165   71907 command_runner.go:130] > # separate_pull_cgroup = ""
	I0531 19:08:08.811174   71907 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I0531 19:08:08.811185   71907 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I0531 19:08:08.811190   71907 command_runner.go:130] > # will be added.
	I0531 19:08:08.811194   71907 command_runner.go:130] > # default_capabilities = [
	I0531 19:08:08.811200   71907 command_runner.go:130] > # 	"CHOWN",
	I0531 19:08:08.811204   71907 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I0531 19:08:08.811209   71907 command_runner.go:130] > # 	"FSETID",
	I0531 19:08:08.811224   71907 command_runner.go:130] > # 	"FOWNER",
	I0531 19:08:08.811232   71907 command_runner.go:130] > # 	"SETGID",
	I0531 19:08:08.811237   71907 command_runner.go:130] > # 	"SETUID",
	I0531 19:08:08.811241   71907 command_runner.go:130] > # 	"SETPCAP",
	I0531 19:08:08.811247   71907 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I0531 19:08:08.811259   71907 command_runner.go:130] > # 	"KILL",
	I0531 19:08:08.811263   71907 command_runner.go:130] > # ]
	I0531 19:08:08.811275   71907 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I0531 19:08:08.811287   71907 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I0531 19:08:08.811293   71907 command_runner.go:130] > # add_inheritable_capabilities = true
	I0531 19:08:08.811304   71907 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0531 19:08:08.811312   71907 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0531 19:08:08.811316   71907 command_runner.go:130] > # default_sysctls = [
	I0531 19:08:08.811321   71907 command_runner.go:130] > # ]
	I0531 19:08:08.811327   71907 command_runner.go:130] > # List of devices on the host that a
	I0531 19:08:08.811334   71907 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I0531 19:08:08.811339   71907 command_runner.go:130] > # allowed_devices = [
	I0531 19:08:08.811344   71907 command_runner.go:130] > # 	"/dev/fuse",
	I0531 19:08:08.811354   71907 command_runner.go:130] > # ]
	I0531 19:08:08.811360   71907 command_runner.go:130] > # List of additional devices, specified as
	I0531 19:08:08.811378   71907 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0531 19:08:08.811388   71907 command_runner.go:130] > # If it is empty or commented out, only the devices
	I0531 19:08:08.811396   71907 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0531 19:08:08.811403   71907 command_runner.go:130] > # additional_devices = [
	I0531 19:08:08.811407   71907 command_runner.go:130] > # ]
	I0531 19:08:08.811416   71907 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I0531 19:08:08.811425   71907 command_runner.go:130] > # cdi_spec_dirs = [
	I0531 19:08:08.811435   71907 command_runner.go:130] > # 	"/etc/cdi",
	I0531 19:08:08.811446   71907 command_runner.go:130] > # 	"/var/run/cdi",
	I0531 19:08:08.811451   71907 command_runner.go:130] > # ]
	I0531 19:08:08.811458   71907 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I0531 19:08:08.811466   71907 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I0531 19:08:08.811470   71907 command_runner.go:130] > # Defaults to false.
	I0531 19:08:08.811479   71907 command_runner.go:130] > # device_ownership_from_security_context = false
	I0531 19:08:08.811488   71907 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0531 19:08:08.811495   71907 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I0531 19:08:08.811500   71907 command_runner.go:130] > # hooks_dir = [
	I0531 19:08:08.811509   71907 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I0531 19:08:08.811514   71907 command_runner.go:130] > # ]
	I0531 19:08:08.811524   71907 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I0531 19:08:08.811532   71907 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0531 19:08:08.811538   71907 command_runner.go:130] > # its default mounts from the following two files:
	I0531 19:08:08.811546   71907 command_runner.go:130] > #
	I0531 19:08:08.811554   71907 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0531 19:08:08.811562   71907 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I0531 19:08:08.811568   71907 command_runner.go:130] > #      override the default mounts shipped with the package.
	I0531 19:08:08.811572   71907 command_runner.go:130] > #
	I0531 19:08:08.811580   71907 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0531 19:08:08.811594   71907 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0531 19:08:08.811603   71907 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0531 19:08:08.811612   71907 command_runner.go:130] > #      only add mounts it finds in this file.
	I0531 19:08:08.811620   71907 command_runner.go:130] > #
	I0531 19:08:08.811626   71907 command_runner.go:130] > # default_mounts_file = ""
	I0531 19:08:08.811635   71907 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I0531 19:08:08.811647   71907 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I0531 19:08:08.811652   71907 command_runner.go:130] > # pids_limit = 0
	I0531 19:08:08.811660   71907 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I0531 19:08:08.811673   71907 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0531 19:08:08.811690   71907 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0531 19:08:08.811704   71907 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I0531 19:08:08.811713   71907 command_runner.go:130] > # log_size_max = -1
	I0531 19:08:08.811728   71907 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I0531 19:08:08.811742   71907 command_runner.go:130] > # log_to_journald = false
	I0531 19:08:08.811753   71907 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I0531 19:08:08.811759   71907 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I0531 19:08:08.811772   71907 command_runner.go:130] > # Path to directory for container attach sockets.
	I0531 19:08:08.811779   71907 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I0531 19:08:08.811786   71907 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I0531 19:08:08.811790   71907 command_runner.go:130] > # bind_mount_prefix = ""
	I0531 19:08:08.811797   71907 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I0531 19:08:08.811802   71907 command_runner.go:130] > # read_only = false
	I0531 19:08:08.811842   71907 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0531 19:08:08.811851   71907 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0531 19:08:08.811856   71907 command_runner.go:130] > # live configuration reload.
	I0531 19:08:08.811861   71907 command_runner.go:130] > # log_level = "info"
	I0531 19:08:08.811882   71907 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I0531 19:08:08.811892   71907 command_runner.go:130] > # This option supports live configuration reload.
	I0531 19:08:08.811896   71907 command_runner.go:130] > # log_filter = ""
	I0531 19:08:08.811904   71907 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I0531 19:08:08.811914   71907 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0531 19:08:08.811924   71907 command_runner.go:130] > # separated by comma.
	I0531 19:08:08.812487   71907 command_runner.go:130] > # uid_mappings = ""
	I0531 19:08:08.812557   71907 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I0531 19:08:08.812585   71907 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0531 19:08:08.812622   71907 command_runner.go:130] > # separated by comma.
	I0531 19:08:08.812662   71907 command_runner.go:130] > # gid_mappings = ""
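As a hedged aside, the containerUID:HostUID:Size form described above could be filled in like this (the host range 100000:65536 and the drop-in path are assumptions, not taken from this run; CRI-O typically merges /etc/crio/crio.conf.d/* over crio.conf):

	# hypothetical drop-in; values are examples only
	sudo tee /etc/crio/crio.conf.d/10-userns.conf <<'EOF'
	[crio.runtime]
	uid_mappings = "0:100000:65536"
	gid_mappings = "0:100000:65536"
	EOF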
	I0531 19:08:08.812691   71907 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I0531 19:08:08.812739   71907 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0531 19:08:08.812769   71907 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0531 19:08:08.812812   71907 command_runner.go:130] > # minimum_mappable_uid = -1
	I0531 19:08:08.812850   71907 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I0531 19:08:08.812901   71907 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0531 19:08:08.812933   71907 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0531 19:08:08.812970   71907 command_runner.go:130] > # minimum_mappable_gid = -1
	I0531 19:08:08.813008   71907 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0531 19:08:08.813030   71907 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I0531 19:08:08.813080   71907 command_runner.go:130] > # value is 30s; lower values are ignored by CRI-O.
	I0531 19:08:08.813112   71907 command_runner.go:130] > # ctr_stop_timeout = 30
	I0531 19:08:08.813148   71907 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0531 19:08:08.813200   71907 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I0531 19:08:08.813240   71907 command_runner.go:130] > # a kernel separating runtime (like kata).
	I0531 19:08:08.813270   71907 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I0531 19:08:08.813321   71907 command_runner.go:130] > # drop_infra_ctr = true
	I0531 19:08:08.813357   71907 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0531 19:08:08.813379   71907 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I0531 19:08:08.813423   71907 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0531 19:08:08.813494   71907 command_runner.go:130] > # infra_ctr_cpuset = ""
	I0531 19:08:08.813537   71907 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I0531 19:08:08.813572   71907 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I0531 19:08:08.813601   71907 command_runner.go:130] > # namespaces_dir = "/var/run"
	I0531 19:08:08.813625   71907 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0531 19:08:08.813661   71907 command_runner.go:130] > # pinns_path = ""
	I0531 19:08:08.813713   71907 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0531 19:08:08.813740   71907 command_runner.go:130] > # The name is matched against the runtimes map below. If this value is changed,
	I0531 19:08:08.813776   71907 command_runner.go:130] > # the corresponding existing entry from the runtimes map below will be ignored.
	I0531 19:08:08.813806   71907 command_runner.go:130] > # default_runtime = "runc"
	I0531 19:08:08.813831   71907 command_runner.go:130] > # A list of paths that, when absent from the host,
	I0531 19:08:08.813887   71907 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior of being created as a directory).
	I0531 19:08:08.813922   71907 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I0531 19:08:08.813947   71907 command_runner.go:130] > # creation as a file is not desired either.
	I0531 19:08:08.813996   71907 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I0531 19:08:08.814029   71907 command_runner.go:130] > # the hostname is being managed dynamically.
	I0531 19:08:08.814055   71907 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I0531 19:08:08.814085   71907 command_runner.go:130] > # ]
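To illustrate the list syntax just described (the drop-in path is an assumption; /etc/hostname is the example from the comment above):

	sudo tee /etc/crio/crio.conf.d/20-absent-mounts.conf <<'EOF'
	[crio.runtime]
	absent_mount_sources_to_reject = [
	    "/etc/hostname",
	]
	EOF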
	I0531 19:08:08.814129   71907 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0531 19:08:08.814164   71907 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I0531 19:08:08.814196   71907 command_runner.go:130] > # If no runtime handler is provided, the runtime will be picked based on the level
	I0531 19:08:08.814246   71907 command_runner.go:130] > # of trust of the workload. Each entry in the table should follow the format:
	I0531 19:08:08.814275   71907 command_runner.go:130] > #
	I0531 19:08:08.814306   71907 command_runner.go:130] > #[crio.runtime.runtimes.runtime-handler]
	I0531 19:08:08.814334   71907 command_runner.go:130] > #  runtime_path = "/path/to/the/executable"
	I0531 19:08:08.814361   71907 command_runner.go:130] > #  runtime_type = "oci"
	I0531 19:08:08.814408   71907 command_runner.go:130] > #  runtime_root = "/path/to/the/root"
	I0531 19:08:08.814433   71907 command_runner.go:130] > #  privileged_without_host_devices = false
	I0531 19:08:08.814457   71907 command_runner.go:130] > #  allowed_annotations = []
	I0531 19:08:08.814499   71907 command_runner.go:130] > # Where:
	I0531 19:08:08.814540   71907 command_runner.go:130] > # - runtime-handler: name used to identify the runtime
	I0531 19:08:08.814582   71907 command_runner.go:130] > # - runtime_path (optional, string): absolute path to the runtime executable in
	I0531 19:08:08.814624   71907 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0531 19:08:08.814664   71907 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I0531 19:08:08.814690   71907 command_runner.go:130] > #   in $PATH.
	I0531 19:08:08.814727   71907 command_runner.go:130] > # - runtime_type (optional, string): type of runtime, one of: "oci", "vm". If
	I0531 19:08:08.814773   71907 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I0531 19:08:08.814798   71907 command_runner.go:130] > # - runtime_root (optional, string): root directory for storage of containers
	I0531 19:08:08.814822   71907 command_runner.go:130] > #   state.
	I0531 19:08:08.814987   71907 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I0531 19:08:08.815044   71907 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I0531 19:08:08.815077   71907 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0531 19:08:08.815120   71907 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I0531 19:08:08.815158   71907 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0531 19:08:08.815198   71907 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0531 19:08:08.815240   71907 command_runner.go:130] > #   The currently recognized values are:
	I0531 19:08:08.815272   71907 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0531 19:08:08.815312   71907 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I0531 19:08:08.815343   71907 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0531 19:08:08.815391   71907 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0531 19:08:08.815423   71907 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I0531 19:08:08.815467   71907 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I0531 19:08:08.815515   71907 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I0531 19:08:08.815557   71907 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): if set to "container", indicates exec probes
	I0531 19:08:08.815597   71907 command_runner.go:130] > #   should be moved to the container's cgroup
	I0531 19:08:08.815646   71907 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I0531 19:08:08.815677   71907 command_runner.go:130] > runtime_path = "/usr/lib/cri-o-runc/sbin/runc"
	I0531 19:08:08.815723   71907 command_runner.go:130] > runtime_type = "oci"
	I0531 19:08:08.815753   71907 command_runner.go:130] > runtime_root = "/run/runc"
	I0531 19:08:08.815779   71907 command_runner.go:130] > runtime_config_path = ""
	I0531 19:08:08.815806   71907 command_runner.go:130] > monitor_path = ""
	I0531 19:08:08.815840   71907 command_runner.go:130] > monitor_cgroup = ""
	I0531 19:08:08.815879   71907 command_runner.go:130] > monitor_exec_cgroup = ""
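A minimal sketch of registering a second handler in the table format documented above (assumptions: crun is installed at /usr/bin/crun and the crio unit supports reload; neither is verified by this run):

	sudo tee /etc/crio/crio.conf.d/30-crun.conf <<'EOF'
	[crio.runtime.runtimes.crun]
	runtime_path = "/usr/bin/crun"
	runtime_type = "oci"
	runtime_root = "/run/crun"
	EOF
	sudo systemctl reload crio   # re-read the config (restart instead if the unit has no ExecReload)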
	I0531 19:08:08.815935   71907 command_runner.go:130] > # crun is a fast and lightweight fully featured OCI runtime and C library for
	I0531 19:08:08.815967   71907 command_runner.go:130] > # running containers
	I0531 19:08:08.816010   71907 command_runner.go:130] > #[crio.runtime.runtimes.crun]
	I0531 19:08:08.816046   71907 command_runner.go:130] > # Kata Containers is an OCI runtime, where containers are run inside lightweight
	I0531 19:08:08.816103   71907 command_runner.go:130] > # VMs. Kata provides additional isolation towards the host, minimizing the host attack
	I0531 19:08:08.816428   71907 command_runner.go:130] > # surface and mitigating the consequences of a container breakout.
	I0531 19:08:08.816587   71907 command_runner.go:130] > # Kata Containers with the default configured VMM
	I0531 19:08:08.816717   71907 command_runner.go:130] > #[crio.runtime.runtimes.kata-runtime]
	I0531 19:08:08.816732   71907 command_runner.go:130] > # Kata Containers with the QEMU VMM
	I0531 19:08:08.816738   71907 command_runner.go:130] > #[crio.runtime.runtimes.kata-qemu]
	I0531 19:08:08.816747   71907 command_runner.go:130] > # Kata Containers with the Firecracker VMM
	I0531 19:08:08.816759   71907 command_runner.go:130] > #[crio.runtime.runtimes.kata-fc]
	I0531 19:08:08.816774   71907 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I0531 19:08:08.816785   71907 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I0531 19:08:08.816803   71907 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I0531 19:08:08.816825   71907 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix and a set of resources it supports mutating.
	I0531 19:08:08.816843   71907 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" (to configure the cpuset).
	I0531 19:08:08.816850   71907 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I0531 19:08:08.816864   71907 command_runner.go:130] > # For a container to opt into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I0531 19:08:08.816880   71907 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified,
	I0531 19:08:08.816890   71907 command_runner.go:130] > # overriding the default value for that resource type.
	I0531 19:08:08.816903   71907 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I0531 19:08:08.816911   71907 command_runner.go:130] > # Example:
	I0531 19:08:08.816920   71907 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I0531 19:08:08.816934   71907 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I0531 19:08:08.816946   71907 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I0531 19:08:08.816961   71907 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I0531 19:08:08.816966   71907 command_runner.go:130] > # cpuset = "0-1"
	I0531 19:08:08.816991   71907 command_runner.go:130] > # cpushares = 0
	I0531 19:08:08.816998   71907 command_runner.go:130] > # Where:
	I0531 19:08:08.817004   71907 command_runner.go:130] > # The workload name is workload-type.
	I0531 19:08:08.817016   71907 command_runner.go:130] > # To opt in, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I0531 19:08:08.817027   71907 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I0531 19:08:08.817039   71907 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I0531 19:08:08.817052   71907 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I0531 19:08:08.817064   71907 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
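To make the annotation flow concrete, a hedged pod manifest (the pod/container names and the "512" value are invented for illustration; the annotation keys follow the example above):

	cat <<'EOF' | kubectl apply -f -
	apiVersion: v1
	kind: Pod
	metadata:
	  name: workload-demo
	  annotations:
	    io.crio/workload: ""                             # activation_annotation, key only
	    io.crio.workload-type/c1: '{"cpushares": "512"}' # per-container override
	spec:
	  containers:
	  - name: c1
	    image: registry.k8s.io/pause:3.9
	EOF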
	I0531 19:08:08.817069   71907 command_runner.go:130] > # 
	I0531 19:08:08.817077   71907 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0531 19:08:08.817088   71907 command_runner.go:130] > #
	I0531 19:08:08.817105   71907 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I0531 19:08:08.817119   71907 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0531 19:08:08.817132   71907 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I0531 19:08:08.817146   71907 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0531 19:08:08.817168   71907 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I0531 19:08:08.817176   71907 command_runner.go:130] > [crio.image]
	I0531 19:08:08.817190   71907 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I0531 19:08:08.817201   71907 command_runner.go:130] > # default_transport = "docker://"
	I0531 19:08:08.817212   71907 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I0531 19:08:08.817220   71907 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0531 19:08:08.817228   71907 command_runner.go:130] > # global_auth_file = ""
	I0531 19:08:08.817237   71907 command_runner.go:130] > # The image used to instantiate infra containers.
	I0531 19:08:08.817243   71907 command_runner.go:130] > # This option supports live configuration reload.
	I0531 19:08:08.817250   71907 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.9"
	I0531 19:08:08.817262   71907 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I0531 19:08:08.817274   71907 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0531 19:08:08.817284   71907 command_runner.go:130] > # This option supports live configuration reload.
	I0531 19:08:08.817289   71907 command_runner.go:130] > # pause_image_auth_file = ""
	I0531 19:08:08.817302   71907 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I0531 19:08:08.817317   71907 command_runner.go:130] > # When explicitly set to "", it will fall back to the entrypoint and command
	I0531 19:08:08.817332   71907 command_runner.go:130] > # specified in the pause image. When commented out, it will fall back to the
	I0531 19:08:08.817340   71907 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I0531 19:08:08.817348   71907 command_runner.go:130] > # pause_command = "/pause"
	I0531 19:08:08.817355   71907 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I0531 19:08:08.817372   71907 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0531 19:08:08.817380   71907 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I0531 19:08:08.817396   71907 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0531 19:08:08.817409   71907 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I0531 19:08:08.817414   71907 command_runner.go:130] > # signature_policy = ""
	I0531 19:08:08.817427   71907 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I0531 19:08:08.817451   71907 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I0531 19:08:08.817462   71907 command_runner.go:130] > # changing them here.
	I0531 19:08:08.817478   71907 command_runner.go:130] > # insecure_registries = [
	I0531 19:08:08.817485   71907 command_runner.go:130] > # ]
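Purely to show the syntax (the registry host is hypothetical, and per the comment above registries.conf is the preferred place for this):

	sudo tee /etc/crio/crio.conf.d/40-insecure.conf <<'EOF'
	[crio.image]
	insecure_registries = [
	    "registry.example.internal:5000",
	]
	EOF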
	I0531 19:08:08.817496   71907 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0531 19:08:08.817506   71907 command_runner.go:130] > # ignore; the last will ignore volumes entirely.
	I0531 19:08:08.817513   71907 command_runner.go:130] > # image_volumes = "mkdir"
	I0531 19:08:08.817529   71907 command_runner.go:130] > # Temporary directory to use for storing big files
	I0531 19:08:08.817544   71907 command_runner.go:130] > # big_files_temporary_dir = ""
	I0531 19:08:08.817551   71907 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I0531 19:08:08.817556   71907 command_runner.go:130] > # CNI plugins.
	I0531 19:08:08.817561   71907 command_runner.go:130] > [crio.network]
	I0531 19:08:08.817574   71907 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I0531 19:08:08.817587   71907 command_runner.go:130] > # CRI-O will pick up the first one found in network_dir.
	I0531 19:08:08.817593   71907 command_runner.go:130] > # cni_default_network = ""
	I0531 19:08:08.817608   71907 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I0531 19:08:08.817619   71907 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I0531 19:08:08.817635   71907 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I0531 19:08:08.817642   71907 command_runner.go:130] > # plugin_dirs = [
	I0531 19:08:08.817647   71907 command_runner.go:130] > # 	"/opt/cni/bin/",
	I0531 19:08:08.817651   71907 command_runner.go:130] > # ]
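Since CRI-O picks the first CNI config found when cni_default_network is unset, a quick way to see what it would choose (paths are the defaults listed above):

	ls /etc/cni/net.d/        # first config file found here wins
	ls /opt/cni/bin/          # plugin binaries the configs refer to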
	I0531 19:08:08.817658   71907 command_runner.go:130] > # A necessary configuration for Prometheus-based metrics retrieval
	I0531 19:08:08.817668   71907 command_runner.go:130] > [crio.metrics]
	I0531 19:08:08.817675   71907 command_runner.go:130] > # Globally enable or disable metrics support.
	I0531 19:08:08.817683   71907 command_runner.go:130] > # enable_metrics = false
	I0531 19:08:08.817689   71907 command_runner.go:130] > # Specify enabled metrics collectors.
	I0531 19:08:08.817697   71907 command_runner.go:130] > # By default, all metrics are enabled.
	I0531 19:08:08.817708   71907 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I0531 19:08:08.817725   71907 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I0531 19:08:08.817741   71907 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I0531 19:08:08.817754   71907 command_runner.go:130] > # metrics_collectors = [
	I0531 19:08:08.817760   71907 command_runner.go:130] > # 	"operations",
	I0531 19:08:08.817768   71907 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I0531 19:08:08.817774   71907 command_runner.go:130] > # 	"operations_latency_microseconds",
	I0531 19:08:08.817784   71907 command_runner.go:130] > # 	"operations_errors",
	I0531 19:08:08.817798   71907 command_runner.go:130] > # 	"image_pulls_by_digest",
	I0531 19:08:08.817806   71907 command_runner.go:130] > # 	"image_pulls_by_name",
	I0531 19:08:08.817812   71907 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I0531 19:08:08.817817   71907 command_runner.go:130] > # 	"image_pulls_failures",
	I0531 19:08:08.817822   71907 command_runner.go:130] > # 	"image_pulls_successes",
	I0531 19:08:08.817830   71907 command_runner.go:130] > # 	"image_pulls_layer_size",
	I0531 19:08:08.817835   71907 command_runner.go:130] > # 	"image_layer_reuse",
	I0531 19:08:08.817843   71907 command_runner.go:130] > # 	"containers_oom_total",
	I0531 19:08:08.817848   71907 command_runner.go:130] > # 	"containers_oom",
	I0531 19:08:08.817855   71907 command_runner.go:130] > # 	"processes_defunct",
	I0531 19:08:08.817860   71907 command_runner.go:130] > # 	"operations_total",
	I0531 19:08:08.817871   71907 command_runner.go:130] > # 	"operations_latency_seconds",
	I0531 19:08:08.817887   71907 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I0531 19:08:08.817898   71907 command_runner.go:130] > # 	"operations_errors_total",
	I0531 19:08:08.817904   71907 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I0531 19:08:08.817909   71907 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I0531 19:08:08.817923   71907 command_runner.go:130] > # 	"image_pulls_failure_total",
	I0531 19:08:08.817931   71907 command_runner.go:130] > # 	"image_pulls_success_total",
	I0531 19:08:08.817939   71907 command_runner.go:130] > # 	"image_layer_reuse_total",
	I0531 19:08:08.817948   71907 command_runner.go:130] > # 	"containers_oom_count_total",
	I0531 19:08:08.817957   71907 command_runner.go:130] > # ]
	I0531 19:08:08.817967   71907 command_runner.go:130] > # The port on which the metrics server will listen.
	I0531 19:08:08.817972   71907 command_runner.go:130] > # metrics_port = 9090
	I0531 19:08:08.817981   71907 command_runner.go:130] > # Local socket path to bind the metrics server to
	I0531 19:08:08.817986   71907 command_runner.go:130] > # metrics_socket = ""
	I0531 19:08:08.817993   71907 command_runner.go:130] > # The certificate for the secure metrics server.
	I0531 19:08:08.818000   71907 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I0531 19:08:08.818010   71907 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I0531 19:08:08.818016   71907 command_runner.go:130] > # certificate on any modification event.
	I0531 19:08:08.818023   71907 command_runner.go:130] > # metrics_cert = ""
	I0531 19:08:08.818030   71907 command_runner.go:130] > # The certificate key for the secure metrics server.
	I0531 19:08:08.818041   71907 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I0531 19:08:08.818059   71907 command_runner.go:130] > # metrics_key = ""
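A hedged way to poke at the metrics endpoint documented above (assumes metrics get enabled via a drop-in and CRI-O is restarted; 9090 is the documented default port):

	sudo tee /etc/crio/crio.conf.d/50-metrics.conf <<'EOF'
	[crio.metrics]
	enable_metrics = true
	metrics_port = 9090
	EOF
	sudo systemctl restart crio
	curl -s http://127.0.0.1:9090/metrics | head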
	I0531 19:08:08.818081   71907 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I0531 19:08:08.818089   71907 command_runner.go:130] > [crio.tracing]
	I0531 19:08:08.818114   71907 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I0531 19:08:08.818129   71907 command_runner.go:130] > # enable_tracing = false
	I0531 19:08:08.818140   71907 command_runner.go:130] > # Address on which the gRPC trace collector listens.
	I0531 19:08:08.818149   71907 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I0531 19:08:08.818156   71907 command_runner.go:130] > # Number of samples to collect per million spans.
	I0531 19:08:08.818162   71907 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I0531 19:08:08.818171   71907 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I0531 19:08:08.818176   71907 command_runner.go:130] > [crio.stats]
	I0531 19:08:08.818186   71907 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I0531 19:08:08.818196   71907 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I0531 19:08:08.818204   71907 command_runner.go:130] > # stats_collection_period = 0
	I0531 19:08:08.818260   71907 command_runner.go:130] ! time="2023-05-31 19:08:08.806768243Z" level=info msg="Starting CRI-O, version: 1.24.5, git: b007cb6753d97de6218787b6894b0e3cc1dc8ecd(clean)"
	I0531 19:08:08.818300   71907 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I0531 19:08:08.818401   71907 cni.go:84] Creating CNI manager for ""
	I0531 19:08:08.818416   71907 cni.go:136] 1 nodes found, recommending kindnet
	I0531 19:08:08.818431   71907 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0531 19:08:08.818460   71907 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.58.2 APIServerPort:8443 KubernetesVersion:v1.27.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-025078 NodeName:multinode-025078 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.58.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.58.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0531 19:08:08.818640   71907 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.58.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-025078"
	  kubeletExtraArgs:
	    node-ip: 192.168.58.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.58.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.27.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
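Since this rendered YAML is what later lands in /var/tmp/minikube/kubeadm.yaml, a hedged sanity check once the file is in place ('kubeadm config validate' is assumed to be available in this kubeadm generation):

	sudo /var/lib/minikube/binaries/v1.27.2/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml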
	
	I0531 19:08:08.818775   71907 kubeadm.go:971] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.27.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --enforce-node-allocatable= --hostname-override=multinode-025078 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.58.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.27.2 ClusterName:multinode-025078 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0531 19:08:08.818866   71907 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.27.2
	I0531 19:08:08.828635   71907 command_runner.go:130] > kubeadm
	I0531 19:08:08.828651   71907 command_runner.go:130] > kubectl
	I0531 19:08:08.828656   71907 command_runner.go:130] > kubelet
	I0531 19:08:08.829734   71907 binaries.go:44] Found k8s binaries, skipping transfer
	I0531 19:08:08.829808   71907 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0531 19:08:08.840861   71907 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (426 bytes)
	I0531 19:08:08.862701   71907 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0531 19:08:08.884008   71907 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2097 bytes)
	I0531 19:08:08.904997   71907 ssh_runner.go:195] Run: grep 192.168.58.2	control-plane.minikube.internal$ /etc/hosts
	I0531 19:08:08.909370   71907 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.58.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0531 19:08:08.922441   71907 certs.go:56] Setting up /home/jenkins/minikube-integration/16569-2389/.minikube/profiles/multinode-025078 for IP: 192.168.58.2
	I0531 19:08:08.922469   71907 certs.go:190] acquiring lock for shared ca certs: {Name:mk0147accf8b8da231d39646bdc89fced67451cd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0531 19:08:08.922595   71907 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/16569-2389/.minikube/ca.key
	I0531 19:08:08.922633   71907 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/16569-2389/.minikube/proxy-client-ca.key
	I0531 19:08:08.922678   71907 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/16569-2389/.minikube/profiles/multinode-025078/client.key
	I0531 19:08:08.922688   71907 crypto.go:68] Generating cert /home/jenkins/minikube-integration/16569-2389/.minikube/profiles/multinode-025078/client.crt with IP's: []
	I0531 19:08:09.532961   71907 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/16569-2389/.minikube/profiles/multinode-025078/client.crt ...
	I0531 19:08:09.532993   71907 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16569-2389/.minikube/profiles/multinode-025078/client.crt: {Name:mk9d9cf1c2fc987fda38889f5d0579e4055de141 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0531 19:08:09.533185   71907 crypto.go:164] Writing key to /home/jenkins/minikube-integration/16569-2389/.minikube/profiles/multinode-025078/client.key ...
	I0531 19:08:09.533198   71907 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16569-2389/.minikube/profiles/multinode-025078/client.key: {Name:mk415a0e03f538306250d2fdfbd636bc5e6e3434 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0531 19:08:09.533967   71907 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/16569-2389/.minikube/profiles/multinode-025078/apiserver.key.cee25041
	I0531 19:08:09.533987   71907 crypto.go:68] Generating cert /home/jenkins/minikube-integration/16569-2389/.minikube/profiles/multinode-025078/apiserver.crt.cee25041 with IP's: [192.168.58.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0531 19:08:10.159700   71907 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/16569-2389/.minikube/profiles/multinode-025078/apiserver.crt.cee25041 ...
	I0531 19:08:10.159730   71907 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16569-2389/.minikube/profiles/multinode-025078/apiserver.crt.cee25041: {Name:mk8a5be1882a01e5fd7ace852cf2c2ce63a826a7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0531 19:08:10.160527   71907 crypto.go:164] Writing key to /home/jenkins/minikube-integration/16569-2389/.minikube/profiles/multinode-025078/apiserver.key.cee25041 ...
	I0531 19:08:10.160556   71907 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16569-2389/.minikube/profiles/multinode-025078/apiserver.key.cee25041: {Name:mkbf31ffcc48c296860604cdc857075f7d19bb96 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0531 19:08:10.160683   71907 certs.go:337] copying /home/jenkins/minikube-integration/16569-2389/.minikube/profiles/multinode-025078/apiserver.crt.cee25041 -> /home/jenkins/minikube-integration/16569-2389/.minikube/profiles/multinode-025078/apiserver.crt
	I0531 19:08:10.160794   71907 certs.go:341] copying /home/jenkins/minikube-integration/16569-2389/.minikube/profiles/multinode-025078/apiserver.key.cee25041 -> /home/jenkins/minikube-integration/16569-2389/.minikube/profiles/multinode-025078/apiserver.key
	I0531 19:08:10.160857   71907 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/16569-2389/.minikube/profiles/multinode-025078/proxy-client.key
	I0531 19:08:10.160877   71907 crypto.go:68] Generating cert /home/jenkins/minikube-integration/16569-2389/.minikube/profiles/multinode-025078/proxy-client.crt with IP's: []
	I0531 19:08:10.711625   71907 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/16569-2389/.minikube/profiles/multinode-025078/proxy-client.crt ...
	I0531 19:08:10.711668   71907 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16569-2389/.minikube/profiles/multinode-025078/proxy-client.crt: {Name:mkfecc27a437f8488fd331ba43a3c13d8a71b079 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0531 19:08:10.712353   71907 crypto.go:164] Writing key to /home/jenkins/minikube-integration/16569-2389/.minikube/profiles/multinode-025078/proxy-client.key ...
	I0531 19:08:10.712377   71907 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16569-2389/.minikube/profiles/multinode-025078/proxy-client.key: {Name:mk9a03e91bfe9beeee315e5d31463fa05b3a4cf6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0531 19:08:10.712909   71907 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16569-2389/.minikube/profiles/multinode-025078/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0531 19:08:10.712939   71907 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16569-2389/.minikube/profiles/multinode-025078/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0531 19:08:10.712956   71907 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16569-2389/.minikube/profiles/multinode-025078/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0531 19:08:10.713007   71907 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16569-2389/.minikube/profiles/multinode-025078/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0531 19:08:10.713023   71907 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16569-2389/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0531 19:08:10.713036   71907 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16569-2389/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0531 19:08:10.713051   71907 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16569-2389/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0531 19:08:10.713066   71907 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16569-2389/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0531 19:08:10.713129   71907 certs.go:437] found cert: /home/jenkins/minikube-integration/16569-2389/.minikube/certs/home/jenkins/minikube-integration/16569-2389/.minikube/certs/7804.pem (1338 bytes)
	W0531 19:08:10.713169   71907 certs.go:433] ignoring /home/jenkins/minikube-integration/16569-2389/.minikube/certs/home/jenkins/minikube-integration/16569-2389/.minikube/certs/7804_empty.pem, impossibly tiny 0 bytes
	I0531 19:08:10.713184   71907 certs.go:437] found cert: /home/jenkins/minikube-integration/16569-2389/.minikube/certs/home/jenkins/minikube-integration/16569-2389/.minikube/certs/ca-key.pem (1675 bytes)
	I0531 19:08:10.713210   71907 certs.go:437] found cert: /home/jenkins/minikube-integration/16569-2389/.minikube/certs/home/jenkins/minikube-integration/16569-2389/.minikube/certs/ca.pem (1078 bytes)
	I0531 19:08:10.713240   71907 certs.go:437] found cert: /home/jenkins/minikube-integration/16569-2389/.minikube/certs/home/jenkins/minikube-integration/16569-2389/.minikube/certs/cert.pem (1123 bytes)
	I0531 19:08:10.713273   71907 certs.go:437] found cert: /home/jenkins/minikube-integration/16569-2389/.minikube/certs/home/jenkins/minikube-integration/16569-2389/.minikube/certs/key.pem (1679 bytes)
	I0531 19:08:10.713320   71907 certs.go:437] found cert: /home/jenkins/minikube-integration/16569-2389/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/16569-2389/.minikube/files/etc/ssl/certs/78042.pem (1708 bytes)
	I0531 19:08:10.713353   71907 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16569-2389/.minikube/files/etc/ssl/certs/78042.pem -> /usr/share/ca-certificates/78042.pem
	I0531 19:08:10.713368   71907 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16569-2389/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0531 19:08:10.713379   71907 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16569-2389/.minikube/certs/7804.pem -> /usr/share/ca-certificates/7804.pem
	I0531 19:08:10.714041   71907 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16569-2389/.minikube/profiles/multinode-025078/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0531 19:08:10.744610   71907 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16569-2389/.minikube/profiles/multinode-025078/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0531 19:08:10.773464   71907 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16569-2389/.minikube/profiles/multinode-025078/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0531 19:08:10.802076   71907 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16569-2389/.minikube/profiles/multinode-025078/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0531 19:08:10.830444   71907 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16569-2389/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0531 19:08:10.858513   71907 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16569-2389/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0531 19:08:10.887323   71907 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16569-2389/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0531 19:08:10.915868   71907 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16569-2389/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0531 19:08:10.944366   71907 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16569-2389/.minikube/files/etc/ssl/certs/78042.pem --> /usr/share/ca-certificates/78042.pem (1708 bytes)
	I0531 19:08:10.974034   71907 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16569-2389/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0531 19:08:11.003037   71907 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16569-2389/.minikube/certs/7804.pem --> /usr/share/ca-certificates/7804.pem (1338 bytes)
	I0531 19:08:11.032498   71907 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0531 19:08:11.055124   71907 ssh_runner.go:195] Run: openssl version
	I0531 19:08:11.061897   71907 command_runner.go:130] > OpenSSL 3.0.2 15 Mar 2022 (Library: OpenSSL 3.0.2 15 Mar 2022)
	I0531 19:08:11.062265   71907 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0531 19:08:11.074202   71907 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0531 19:08:11.078679   71907 command_runner.go:130] > -rw-r--r-- 1 root root 1111 May 31 18:45 /usr/share/ca-certificates/minikubeCA.pem
	I0531 19:08:11.078896   71907 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 May 31 18:45 /usr/share/ca-certificates/minikubeCA.pem
	I0531 19:08:11.078958   71907 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0531 19:08:11.087380   71907 command_runner.go:130] > b5213941
	I0531 19:08:11.087745   71907 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0531 19:08:11.099621   71907 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7804.pem && ln -fs /usr/share/ca-certificates/7804.pem /etc/ssl/certs/7804.pem"
	I0531 19:08:11.111742   71907 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7804.pem
	I0531 19:08:11.116456   71907 command_runner.go:130] > -rw-r--r-- 1 root root 1338 May 31 18:52 /usr/share/ca-certificates/7804.pem
	I0531 19:08:11.116523   71907 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 May 31 18:52 /usr/share/ca-certificates/7804.pem
	I0531 19:08:11.116606   71907 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7804.pem
	I0531 19:08:11.125256   71907 command_runner.go:130] > 51391683
	I0531 19:08:11.125324   71907 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/7804.pem /etc/ssl/certs/51391683.0"
	I0531 19:08:11.137123   71907 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/78042.pem && ln -fs /usr/share/ca-certificates/78042.pem /etc/ssl/certs/78042.pem"
	I0531 19:08:11.148959   71907 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/78042.pem
	I0531 19:08:11.153637   71907 command_runner.go:130] > -rw-r--r-- 1 root root 1708 May 31 18:52 /usr/share/ca-certificates/78042.pem
	I0531 19:08:11.153721   71907 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 May 31 18:52 /usr/share/ca-certificates/78042.pem
	I0531 19:08:11.153776   71907 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/78042.pem
	I0531 19:08:11.161986   71907 command_runner.go:130] > 3ec20f2e
	I0531 19:08:11.162395   71907 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/78042.pem /etc/ssl/certs/3ec20f2e.0"
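The three certificate checks above all follow the same OpenSSL subject-hash pattern; condensed into a sketch (CERT stands in for any of the three PEM files):

	CERT=/usr/share/ca-certificates/minikubeCA.pem
	sudo ln -fs "$CERT" /etc/ssl/certs/minikubeCA.pem
	HASH=$(openssl x509 -hash -noout -in "$CERT")                    # e.g. b5213941 above
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"  # where OpenSSL trust lookups expect it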
	I0531 19:08:11.174055   71907 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0531 19:08:11.178401   71907 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0531 19:08:11.178440   71907 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0531 19:08:11.178479   71907 kubeadm.go:404] StartCluster: {Name:multinode-025078 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685034446-16582@sha256:aa728b22374c829d1e5b0a5d64d51d3e0ae0f2b191381d957516fdff68f357c8 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:multinode-025078 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0531 19:08:11.178574   71907 cri.go:53] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0531 19:08:11.178635   71907 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0531 19:08:11.222279   71907 cri.go:88] found id: ""
	I0531 19:08:11.222389   71907 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0531 19:08:11.233209   71907 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/kubeadm-flags.env': No such file or directory
	I0531 19:08:11.233239   71907 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/config.yaml': No such file or directory
	I0531 19:08:11.233248   71907 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/etcd': No such file or directory
	I0531 19:08:11.233320   71907 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0531 19:08:11.244103   71907 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0531 19:08:11.244172   71907 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0531 19:08:11.255162   71907 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I0531 19:08:11.255226   71907 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I0531 19:08:11.255255   71907 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I0531 19:08:11.255293   71907 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0531 19:08:11.255343   71907 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0531 19:08:11.255379   71907 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0531 19:08:11.307879   71907 kubeadm.go:322] [init] Using Kubernetes version: v1.27.2
	I0531 19:08:11.307909   71907 command_runner.go:130] > [init] Using Kubernetes version: v1.27.2
	I0531 19:08:11.308087   71907 kubeadm.go:322] [preflight] Running pre-flight checks
	I0531 19:08:11.308103   71907 command_runner.go:130] > [preflight] Running pre-flight checks
	I0531 19:08:11.353136   71907 kubeadm.go:322] [preflight] The system verification failed. Printing the output from the verification:
	I0531 19:08:11.353160   71907 command_runner.go:130] > [preflight] The system verification failed. Printing the output from the verification:
	I0531 19:08:11.353221   71907 kubeadm.go:322] KERNEL_VERSION: 5.15.0-1037-aws
	I0531 19:08:11.353232   71907 command_runner.go:130] > KERNEL_VERSION: 5.15.0-1037-aws
	I0531 19:08:11.353264   71907 kubeadm.go:322] OS: Linux
	I0531 19:08:11.353274   71907 command_runner.go:130] > OS: Linux
	I0531 19:08:11.353316   71907 kubeadm.go:322] CGROUPS_CPU: enabled
	I0531 19:08:11.353330   71907 command_runner.go:130] > CGROUPS_CPU: enabled
	I0531 19:08:11.353375   71907 kubeadm.go:322] CGROUPS_CPUACCT: enabled
	I0531 19:08:11.353384   71907 command_runner.go:130] > CGROUPS_CPUACCT: enabled
	I0531 19:08:11.353427   71907 kubeadm.go:322] CGROUPS_CPUSET: enabled
	I0531 19:08:11.353435   71907 command_runner.go:130] > CGROUPS_CPUSET: enabled
	I0531 19:08:11.353487   71907 kubeadm.go:322] CGROUPS_DEVICES: enabled
	I0531 19:08:11.353495   71907 command_runner.go:130] > CGROUPS_DEVICES: enabled
	I0531 19:08:11.353539   71907 kubeadm.go:322] CGROUPS_FREEZER: enabled
	I0531 19:08:11.353550   71907 command_runner.go:130] > CGROUPS_FREEZER: enabled
	I0531 19:08:11.353597   71907 kubeadm.go:322] CGROUPS_MEMORY: enabled
	I0531 19:08:11.353606   71907 command_runner.go:130] > CGROUPS_MEMORY: enabled
	I0531 19:08:11.353648   71907 kubeadm.go:322] CGROUPS_PIDS: enabled
	I0531 19:08:11.353656   71907 command_runner.go:130] > CGROUPS_PIDS: enabled
	I0531 19:08:11.353700   71907 kubeadm.go:322] CGROUPS_HUGETLB: enabled
	I0531 19:08:11.353709   71907 command_runner.go:130] > CGROUPS_HUGETLB: enabled
	I0531 19:08:11.353751   71907 kubeadm.go:322] CGROUPS_BLKIO: enabled
	I0531 19:08:11.353760   71907 command_runner.go:130] > CGROUPS_BLKIO: enabled
	I0531 19:08:11.435209   71907 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0531 19:08:11.435273   71907 command_runner.go:130] > [preflight] Pulling images required for setting up a Kubernetes cluster
	I0531 19:08:11.435414   71907 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0531 19:08:11.435440   71907 command_runner.go:130] > [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0531 19:08:11.435572   71907 kubeadm.go:322] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0531 19:08:11.435594   71907 command_runner.go:130] > [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0531 19:08:11.691586   71907 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0531 19:08:11.695248   71907 out.go:204]   - Generating certificates and keys ...
	I0531 19:08:11.692001   71907 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0531 19:08:11.695483   71907 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0531 19:08:11.695514   71907 command_runner.go:130] > [certs] Using existing ca certificate authority
	I0531 19:08:11.695618   71907 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0531 19:08:11.695650   71907 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I0531 19:08:12.620137   71907 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0531 19:08:12.620173   71907 command_runner.go:130] > [certs] Generating "apiserver-kubelet-client" certificate and key
	I0531 19:08:14.021620   71907 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0531 19:08:14.021648   71907 command_runner.go:130] > [certs] Generating "front-proxy-ca" certificate and key
	I0531 19:08:14.289883   71907 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0531 19:08:14.289912   71907 command_runner.go:130] > [certs] Generating "front-proxy-client" certificate and key
	I0531 19:08:15.126833   71907 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0531 19:08:15.126866   71907 command_runner.go:130] > [certs] Generating "etcd/ca" certificate and key
	I0531 19:08:15.904971   71907 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0531 19:08:15.904998   71907 command_runner.go:130] > [certs] Generating "etcd/server" certificate and key
	I0531 19:08:15.905339   71907 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [localhost multinode-025078] and IPs [192.168.58.2 127.0.0.1 ::1]
	I0531 19:08:15.905356   71907 command_runner.go:130] > [certs] etcd/server serving cert is signed for DNS names [localhost multinode-025078] and IPs [192.168.58.2 127.0.0.1 ::1]
	I0531 19:08:16.114318   71907 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0531 19:08:16.114342   71907 command_runner.go:130] > [certs] Generating "etcd/peer" certificate and key
	I0531 19:08:16.114723   71907 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-025078] and IPs [192.168.58.2 127.0.0.1 ::1]
	I0531 19:08:16.114755   71907 command_runner.go:130] > [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-025078] and IPs [192.168.58.2 127.0.0.1 ::1]
	I0531 19:08:16.604760   71907 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0531 19:08:16.604768   71907 command_runner.go:130] > [certs] Generating "etcd/healthcheck-client" certificate and key
	I0531 19:08:17.098592   71907 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0531 19:08:17.098617   71907 command_runner.go:130] > [certs] Generating "apiserver-etcd-client" certificate and key
	I0531 19:08:17.355590   71907 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0531 19:08:17.355619   71907 command_runner.go:130] > [certs] Generating "sa" key and public key
	I0531 19:08:17.355812   71907 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0531 19:08:17.355823   71907 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0531 19:08:17.497835   71907 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0531 19:08:17.497860   71907 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I0531 19:08:18.673129   71907 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0531 19:08:18.673152   71907 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0531 19:08:19.479291   71907 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0531 19:08:19.479320   71907 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0531 19:08:19.740323   71907 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0531 19:08:19.740352   71907 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0531 19:08:19.751762   71907 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0531 19:08:19.751791   71907 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0531 19:08:19.753226   71907 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0531 19:08:19.753244   71907 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0531 19:08:19.753301   71907 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0531 19:08:19.753308   71907 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0531 19:08:19.844382   71907 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0531 19:08:19.846398   71907 out.go:204]   - Booting up control plane ...
	I0531 19:08:19.844483   71907 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0531 19:08:19.846499   71907 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0531 19:08:19.846518   71907 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0531 19:08:19.850907   71907 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0531 19:08:19.850930   71907 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0531 19:08:19.852670   71907 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0531 19:08:19.852693   71907 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0531 19:08:19.853992   71907 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0531 19:08:19.854010   71907 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0531 19:08:19.856289   71907 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0531 19:08:19.856309   71907 command_runner.go:130] > [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0531 19:08:27.858964   71907 kubeadm.go:322] [apiclient] All control plane components are healthy after 8.002665 seconds
	I0531 19:08:27.858991   71907 command_runner.go:130] > [apiclient] All control plane components are healthy after 8.002665 seconds
	I0531 19:08:27.859091   71907 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0531 19:08:27.859097   71907 command_runner.go:130] > [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0531 19:08:27.872451   71907 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0531 19:08:27.872475   71907 command_runner.go:130] > [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0531 19:08:28.396331   71907 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0531 19:08:28.396354   71907 command_runner.go:130] > [upload-certs] Skipping phase. Please see --upload-certs
	I0531 19:08:28.396524   71907 kubeadm.go:322] [mark-control-plane] Marking the node multinode-025078 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0531 19:08:28.396530   71907 command_runner.go:130] > [mark-control-plane] Marking the node multinode-025078 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0531 19:08:28.908171   71907 kubeadm.go:322] [bootstrap-token] Using token: jyghof.sl1d6vgx3kd25fmm
	I0531 19:08:28.909937   71907 out.go:204]   - Configuring RBAC rules ...
	I0531 19:08:28.908270   71907 command_runner.go:130] > [bootstrap-token] Using token: jyghof.sl1d6vgx3kd25fmm
	I0531 19:08:28.910057   71907 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0531 19:08:28.910070   71907 command_runner.go:130] > [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0531 19:08:28.915352   71907 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0531 19:08:28.915376   71907 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0531 19:08:28.923254   71907 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0531 19:08:28.923280   71907 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0531 19:08:28.927510   71907 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0531 19:08:28.927535   71907 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0531 19:08:28.931479   71907 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0531 19:08:28.931504   71907 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0531 19:08:28.937348   71907 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0531 19:08:28.937371   71907 command_runner.go:130] > [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0531 19:08:28.951481   71907 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0531 19:08:28.951505   71907 command_runner.go:130] > [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0531 19:08:29.175081   71907 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0531 19:08:29.175105   71907 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I0531 19:08:29.326673   71907 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0531 19:08:29.326700   71907 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I0531 19:08:29.326707   71907 kubeadm.go:322] 
	I0531 19:08:29.326778   71907 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0531 19:08:29.326788   71907 command_runner.go:130] > Your Kubernetes control-plane has initialized successfully!
	I0531 19:08:29.326793   71907 kubeadm.go:322] 
	I0531 19:08:29.326865   71907 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0531 19:08:29.326873   71907 command_runner.go:130] > To start using your cluster, you need to run the following as a regular user:
	I0531 19:08:29.326878   71907 kubeadm.go:322] 
	I0531 19:08:29.326907   71907 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0531 19:08:29.326917   71907 command_runner.go:130] >   mkdir -p $HOME/.kube
	I0531 19:08:29.326971   71907 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0531 19:08:29.326978   71907 command_runner.go:130] >   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0531 19:08:29.327025   71907 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0531 19:08:29.327033   71907 command_runner.go:130] >   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0531 19:08:29.327037   71907 kubeadm.go:322] 
	I0531 19:08:29.327088   71907 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0531 19:08:29.327096   71907 command_runner.go:130] > Alternatively, if you are the root user, you can run:
	I0531 19:08:29.327101   71907 kubeadm.go:322] 
	I0531 19:08:29.327145   71907 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0531 19:08:29.327154   71907 command_runner.go:130] >   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0531 19:08:29.327158   71907 kubeadm.go:322] 
	I0531 19:08:29.327207   71907 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0531 19:08:29.327215   71907 command_runner.go:130] > You should now deploy a pod network to the cluster.
	I0531 19:08:29.327285   71907 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0531 19:08:29.327293   71907 command_runner.go:130] > Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0531 19:08:29.327356   71907 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0531 19:08:29.327364   71907 command_runner.go:130] >   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0531 19:08:29.327369   71907 kubeadm.go:322] 
	I0531 19:08:29.327448   71907 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0531 19:08:29.327456   71907 command_runner.go:130] > You can now join any number of control-plane nodes by copying certificate authorities
	I0531 19:08:29.327527   71907 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0531 19:08:29.327535   71907 command_runner.go:130] > and service account keys on each node and then running the following as root:
	I0531 19:08:29.327539   71907 kubeadm.go:322] 
	I0531 19:08:29.327617   71907 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token jyghof.sl1d6vgx3kd25fmm \
	I0531 19:08:29.327625   71907 command_runner.go:130] >   kubeadm join control-plane.minikube.internal:8443 --token jyghof.sl1d6vgx3kd25fmm \
	I0531 19:08:29.327721   71907 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:8cab06d0df96335aa364fa490e6822dfb6067993e3a4e02e6ce54947f37c8db2 \
	I0531 19:08:29.327728   71907 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:8cab06d0df96335aa364fa490e6822dfb6067993e3a4e02e6ce54947f37c8db2 \
	I0531 19:08:29.327747   71907 kubeadm.go:322] 	--control-plane 
	I0531 19:08:29.327756   71907 command_runner.go:130] > 	--control-plane 
	I0531 19:08:29.327760   71907 kubeadm.go:322] 
	I0531 19:08:29.327839   71907 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0531 19:08:29.327848   71907 command_runner.go:130] > Then you can join any number of worker nodes by running the following on each as root:
	I0531 19:08:29.327852   71907 kubeadm.go:322] 
	I0531 19:08:29.327929   71907 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token jyghof.sl1d6vgx3kd25fmm \
	I0531 19:08:29.327938   71907 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token jyghof.sl1d6vgx3kd25fmm \
	I0531 19:08:29.328033   71907 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:8cab06d0df96335aa364fa490e6822dfb6067993e3a4e02e6ce54947f37c8db2 
	I0531 19:08:29.328041   71907 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:8cab06d0df96335aa364fa490e6822dfb6067993e3a4e02e6ce54947f37c8db2 
	I0531 19:08:29.332029   71907 kubeadm.go:322] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1037-aws\n", err: exit status 1
	I0531 19:08:29.332054   71907 command_runner.go:130] ! 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1037-aws\n", err: exit status 1
	I0531 19:08:29.332154   71907 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0531 19:08:29.332164   71907 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0531 19:08:29.332318   71907 kubeadm.go:322] W0531 19:08:11.435134    1083 images.go:80] could not find officially supported version of etcd for Kubernetes v1.27.2, falling back to the nearest etcd version (3.5.7-0)
	I0531 19:08:29.332327   71907 command_runner.go:130] ! W0531 19:08:11.435134    1083 images.go:80] could not find officially supported version of etcd for Kubernetes v1.27.2, falling back to the nearest etcd version (3.5.7-0)
	I0531 19:08:29.332480   71907 kubeadm.go:322] W0531 19:08:19.853161    1083 images.go:80] could not find officially supported version of etcd for Kubernetes v1.27.2, falling back to the nearest etcd version (3.5.7-0)
	I0531 19:08:29.332488   71907 command_runner.go:130] ! W0531 19:08:19.853161    1083 images.go:80] could not find officially supported version of etcd for Kubernetes v1.27.2, falling back to the nearest etcd version (3.5.7-0)
	I0531 19:08:29.332501   71907 cni.go:84] Creating CNI manager for ""
	I0531 19:08:29.332528   71907 cni.go:136] 1 nodes found, recommending kindnet
	I0531 19:08:29.334985   71907 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0531 19:08:29.336689   71907 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0531 19:08:29.341735   71907 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0531 19:08:29.341759   71907 command_runner.go:130] >   Size: 3841245   	Blocks: 7504       IO Block: 4096   regular file
	I0531 19:08:29.341767   71907 command_runner.go:130] > Device: 36h/54d	Inode: 1306535     Links: 1
	I0531 19:08:29.341775   71907 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0531 19:08:29.341783   71907 command_runner.go:130] > Access: 2023-05-09 19:54:42.000000000 +0000
	I0531 19:08:29.341790   71907 command_runner.go:130] > Modify: 2023-05-09 19:54:42.000000000 +0000
	I0531 19:08:29.341800   71907 command_runner.go:130] > Change: 2023-05-31 18:44:35.901126368 +0000
	I0531 19:08:29.341806   71907 command_runner.go:130] >  Birth: 2023-05-31 18:44:35.857126755 +0000
	I0531 19:08:29.351625   71907 cni.go:181] applying CNI manifest using /var/lib/minikube/binaries/v1.27.2/kubectl ...
	I0531 19:08:29.351691   71907 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0531 19:08:29.388630   71907 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0531 19:08:30.306422   71907 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet created
	I0531 19:08:30.315421   71907 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet created
	I0531 19:08:30.325740   71907 command_runner.go:130] > serviceaccount/kindnet created
	I0531 19:08:30.337349   71907 command_runner.go:130] > daemonset.apps/kindnet created
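The four `created` lines above are the node-side result of minikube copying its kindnet CNI manifest onto the machine (the `scp memory --> /var/tmp/minikube/cni.yaml` step) and applying it with the bundled kubectl over SSH. A minimal sketch of driving such a remote apply with golang.org/x/crypto/ssh follows; the user, password auth, and error handling are illustrative placeholders, not minikube's real ssh_runner, though the port 32847 and command line are taken from this log:

package main

import (
	"fmt"
	"log"

	"golang.org/x/crypto/ssh"
)

func main() {
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.Password("placeholder")}, // stand-in; see note below
		HostKeyCallback: ssh.InsecureIgnoreHostKey(),                   // acceptable only for throwaway test VMs
	}
	client, err := ssh.Dial("tcp", "127.0.0.1:32847", cfg) // forwarded host port from the log
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	sess, err := client.NewSession()
	if err != nil {
		log.Fatal(err)
	}
	defer sess.Close()

	out, err := sess.CombinedOutput("sudo /var/lib/minikube/binaries/v1.27.2/kubectl apply" +
		" --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml")
	if err != nil {
		log.Fatalf("apply failed: %v\n%s", err, out)
	}
	fmt.Printf("%s", out)
}

minikube's actual runner authenticates with the per-profile id_rsa key visible later in this log (sshutil.go:53); password auth above just keeps the sketch self-contained.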
	I0531 19:08:30.342580   71907 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0531 19:08:30.342656   71907 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 19:08:30.342692   71907 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl label nodes minikube.k8s.io/version=v1.30.1 minikube.k8s.io/commit=7022875d4a054c2d518e5e5a7b9d500799d50140 minikube.k8s.io/name=multinode-025078 minikube.k8s.io/updated_at=2023_05_31T19_08_30_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 19:08:30.540501   71907 command_runner.go:130] > node/multinode-025078 labeled
	I0531 19:08:30.544016   71907 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/minikube-rbac created
	I0531 19:08:30.544109   71907 command_runner.go:130] > -16
	I0531 19:08:30.544134   71907 ops.go:34] apiserver oom_adj: -16
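The `-16` above is read back by `cat /proc/$(pgrep kube-apiserver)/oom_adj`, confirming the apiserver is deprioritized for the OOM killer. The same check as a rough Go sketch (Linux-only; `pgrep -o` selects the oldest matching process):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	// pgrep -o prints the PID of the oldest process whose name matches.
	pidOut, err := exec.Command("pgrep", "-o", "kube-apiserver").Output()
	if err != nil {
		fmt.Println("kube-apiserver not found:", err)
		return
	}
	pid := strings.TrimSpace(string(pidOut))
	adj, err := os.ReadFile("/proc/" + pid + "/oom_adj")
	if err != nil {
		fmt.Println("read failed:", err)
		return
	}
	fmt.Println("apiserver oom_adj:", strings.TrimSpace(string(adj))) // -16 in the run above
}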
	I0531 19:08:30.544114   71907 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 19:08:30.645934   71907 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0531 19:08:31.150799   71907 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 19:08:31.242385   71907 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0531 19:08:31.650182   71907 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 19:08:31.739146   71907 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0531 19:08:32.151090   71907 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 19:08:32.249499   71907 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0531 19:08:32.650520   71907 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 19:08:32.743961   71907 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0531 19:08:33.150831   71907 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 19:08:33.239564   71907 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0531 19:08:33.651195   71907 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 19:08:33.743211   71907 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0531 19:08:34.150747   71907 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 19:08:34.254858   71907 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0531 19:08:34.650294   71907 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 19:08:34.739424   71907 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0531 19:08:35.150509   71907 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 19:08:35.246405   71907 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0531 19:08:35.650685   71907 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 19:08:35.744577   71907 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0531 19:08:36.150241   71907 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 19:08:36.241458   71907 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0531 19:08:36.651114   71907 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 19:08:36.739647   71907 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0531 19:08:37.150968   71907 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 19:08:37.239517   71907 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0531 19:08:37.651168   71907 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 19:08:37.740839   71907 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0531 19:08:38.151134   71907 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 19:08:38.250114   71907 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0531 19:08:38.650584   71907 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 19:08:38.742187   71907 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0531 19:08:39.150930   71907 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 19:08:39.245397   71907 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0531 19:08:39.650976   71907 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 19:08:39.764803   71907 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0531 19:08:40.150296   71907 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 19:08:40.250319   71907 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0531 19:08:40.651021   71907 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 19:08:40.743821   71907 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0531 19:08:41.150724   71907 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 19:08:41.263104   71907 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0531 19:08:41.650346   71907 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 19:08:41.740963   71907 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0531 19:08:42.150597   71907 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 19:08:42.252003   71907 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0531 19:08:42.650715   71907 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 19:08:42.769726   71907 command_runner.go:130] > NAME      SECRETS   AGE
	I0531 19:08:42.769744   71907 command_runner.go:130] > default   0         0s
	I0531 19:08:42.769763   71907 kubeadm.go:1076] duration metric: took 12.427181043s to wait for elevateKubeSystemPrivileges.
	I0531 19:08:42.769775   71907 kubeadm.go:406] StartCluster complete in 31.59130093s
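The block of `kubectl get sa default` retries above is minikube waiting, at roughly 500ms intervals, for the controller manager to create the "default" ServiceAccount before it grants kube-system privileges (the 12.4s elevateKubeSystemPrivileges metric). A hedged client-go equivalent of that wait, assuming the node-side kubeconfig path shown in the log:

package main

import (
	"context"
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig") // path taken from the log
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		if _, err := cs.CoreV1().ServiceAccounts("default").Get(context.TODO(), "default", metav1.GetOptions{}); err == nil {
			fmt.Println("default ServiceAccount exists")
			return
		}
		time.Sleep(500 * time.Millisecond) // matches the ~0.5s retry cadence visible above
	}
	fmt.Println("timed out waiting for the default ServiceAccount")
}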
	I0531 19:08:42.769789   71907 settings.go:142] acquiring lock: {Name:mk7112454687e7bda5617b0aa762b583179f0f78 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0531 19:08:42.769848   71907 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/16569-2389/kubeconfig
	I0531 19:08:42.770495   71907 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16569-2389/kubeconfig: {Name:mk0c7b1a200a0a97aa7bf4307790fd99336ec425 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0531 19:08:42.771005   71907 loader.go:373] Config loaded from file:  /home/jenkins/minikube-integration/16569-2389/kubeconfig
	I0531 19:08:42.771267   71907 kapi.go:59] client config for multinode-025078: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/16569-2389/.minikube/profiles/multinode-025078/client.crt", KeyFile:"/home/jenkins/minikube-integration/16569-2389/.minikube/profiles/multinode-025078/client.key", CAFile:"/home/jenkins/minikube-integration/16569-2389/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x13dde10), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0531 19:08:42.772335   71907 round_trippers.go:463] GET https://192.168.58.2:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0531 19:08:42.772345   71907 round_trippers.go:469] Request Headers:
	I0531 19:08:42.772355   71907 round_trippers.go:473]     Accept: application/json, */*
	I0531 19:08:42.772362   71907 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0531 19:08:42.772582   71907 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0531 19:08:42.772841   71907 config.go:182] Loaded profile config "multinode-025078": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.27.2
	I0531 19:08:42.772871   71907 addons.go:496] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
	I0531 19:08:42.772926   71907 addons.go:66] Setting storage-provisioner=true in profile "multinode-025078"
	I0531 19:08:42.772943   71907 addons.go:228] Setting addon storage-provisioner=true in "multinode-025078"
	I0531 19:08:42.772978   71907 host.go:66] Checking if "multinode-025078" exists ...
	I0531 19:08:42.773387   71907 cli_runner.go:164] Run: docker container inspect multinode-025078 --format={{.State.Status}}
	I0531 19:08:42.774067   71907 cert_rotation.go:137] Starting client certificate rotation controller
	I0531 19:08:42.774103   71907 addons.go:66] Setting default-storageclass=true in profile "multinode-025078"
	I0531 19:08:42.774117   71907 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "multinode-025078"
	I0531 19:08:42.774378   71907 cli_runner.go:164] Run: docker container inspect multinode-025078 --format={{.State.Status}}
	I0531 19:08:42.818143   71907 round_trippers.go:574] Response Status: 200 OK in 45 milliseconds
	I0531 19:08:42.818163   71907 round_trippers.go:577] Response Headers:
	I0531 19:08:42.818171   71907 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9616b646-f7d3-407a-9b78-ebb52861f5ac
	I0531 19:08:42.818179   71907 round_trippers.go:580]     Content-Length: 291
	I0531 19:08:42.818185   71907 round_trippers.go:580]     Date: Wed, 31 May 2023 19:08:42 GMT
	I0531 19:08:42.818192   71907 round_trippers.go:580]     Audit-Id: 328843f5-f68b-4c86-a30f-e1db731bbe28
	I0531 19:08:42.818198   71907 round_trippers.go:580]     Cache-Control: no-cache, private
	I0531 19:08:42.818204   71907 round_trippers.go:580]     Content-Type: application/json
	I0531 19:08:42.818211   71907 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 61ef383b-79fe-44e8-8f9c-d13c8cd9f602
	I0531 19:08:42.819133   71907 request.go:1188] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"d8cb62c2-6f96-4520-9400-e74374977fc2","resourceVersion":"260","creationTimestamp":"2023-05-31T19:08:29Z"},"spec":{"replicas":2},"status":{"replicas":0,"selector":"k8s-app=kube-dns"}}
	I0531 19:08:42.819558   71907 request.go:1188] Request Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"d8cb62c2-6f96-4520-9400-e74374977fc2","resourceVersion":"260","creationTimestamp":"2023-05-31T19:08:29Z"},"spec":{"replicas":1},"status":{"replicas":0,"selector":"k8s-app=kube-dns"}}
	I0531 19:08:42.819612   71907 round_trippers.go:463] PUT https://192.168.58.2:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0531 19:08:42.819626   71907 round_trippers.go:469] Request Headers:
	I0531 19:08:42.819636   71907 round_trippers.go:473]     Accept: application/json, */*
	I0531 19:08:42.819650   71907 round_trippers.go:473]     Content-Type: application/json
	I0531 19:08:42.819657   71907 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0531 19:08:42.824084   71907 loader.go:373] Config loaded from file:  /home/jenkins/minikube-integration/16569-2389/kubeconfig
	I0531 19:08:42.824343   71907 kapi.go:59] client config for multinode-025078: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/16569-2389/.minikube/profiles/multinode-025078/client.crt", KeyFile:"/home/jenkins/minikube-integration/16569-2389/.minikube/profiles/multinode-025078/client.key", CAFile:"/home/jenkins/minikube-integration/16569-2389/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x13dde10), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0531 19:08:42.824690   71907 round_trippers.go:463] GET https://192.168.58.2:8443/apis/storage.k8s.io/v1/storageclasses
	I0531 19:08:42.824705   71907 round_trippers.go:469] Request Headers:
	I0531 19:08:42.824716   71907 round_trippers.go:473]     Accept: application/json, */*
	I0531 19:08:42.824730   71907 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0531 19:08:42.837230   71907 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0531 19:08:42.839498   71907 addons.go:420] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0531 19:08:42.839519   71907 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0531 19:08:42.839583   71907 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-025078
	I0531 19:08:42.836882   71907 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I0531 19:08:42.839815   71907 round_trippers.go:577] Response Headers:
	I0531 19:08:42.839826   71907 round_trippers.go:580]     Cache-Control: no-cache, private
	I0531 19:08:42.839839   71907 round_trippers.go:580]     Content-Type: application/json
	I0531 19:08:42.839846   71907 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 61ef383b-79fe-44e8-8f9c-d13c8cd9f602
	I0531 19:08:42.839853   71907 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9616b646-f7d3-407a-9b78-ebb52861f5ac
	I0531 19:08:42.839860   71907 round_trippers.go:580]     Content-Length: 109
	I0531 19:08:42.839867   71907 round_trippers.go:580]     Date: Wed, 31 May 2023 19:08:42 GMT
	I0531 19:08:42.839873   71907 round_trippers.go:580]     Audit-Id: d543444b-dcf3-40eb-b845-2f6c05891963
	I0531 19:08:42.839895   71907 request.go:1188] Response Body: {"kind":"StorageClassList","apiVersion":"storage.k8s.io/v1","metadata":{"resourceVersion":"360"},"items":[]}
	I0531 19:08:42.840143   71907 addons.go:228] Setting addon default-storageclass=true in "multinode-025078"
	I0531 19:08:42.840171   71907 host.go:66] Checking if "multinode-025078" exists ...
	I0531 19:08:42.840579   71907 cli_runner.go:164] Run: docker container inspect multinode-025078 --format={{.State.Status}}
	I0531 19:08:42.865148   71907 round_trippers.go:574] Response Status: 200 OK in 45 milliseconds
	I0531 19:08:42.865169   71907 round_trippers.go:577] Response Headers:
	I0531 19:08:42.865178   71907 round_trippers.go:580]     Audit-Id: 9a5da2cb-d060-4104-99bb-35c4639043dc
	I0531 19:08:42.865185   71907 round_trippers.go:580]     Cache-Control: no-cache, private
	I0531 19:08:42.865192   71907 round_trippers.go:580]     Content-Type: application/json
	I0531 19:08:42.865204   71907 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 61ef383b-79fe-44e8-8f9c-d13c8cd9f602
	I0531 19:08:42.865211   71907 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9616b646-f7d3-407a-9b78-ebb52861f5ac
	I0531 19:08:42.865218   71907 round_trippers.go:580]     Content-Length: 291
	I0531 19:08:42.865225   71907 round_trippers.go:580]     Date: Wed, 31 May 2023 19:08:42 GMT
	I0531 19:08:42.866813   71907 request.go:1188] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"d8cb62c2-6f96-4520-9400-e74374977fc2","resourceVersion":"362","creationTimestamp":"2023-05-31T19:08:29Z"},"spec":{"replicas":1},"status":{"replicas":0,"selector":"k8s-app=kube-dns"}}
	I0531 19:08:42.880097   71907 addons.go:420] installing /etc/kubernetes/addons/storageclass.yaml
	I0531 19:08:42.880117   71907 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0531 19:08:42.880178   71907 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-025078
	I0531 19:08:42.895911   71907 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32847 SSHKeyPath:/home/jenkins/minikube-integration/16569-2389/.minikube/machines/multinode-025078/id_rsa Username:docker}
	I0531 19:08:42.954324   71907 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32847 SSHKeyPath:/home/jenkins/minikube-integration/16569-2389/.minikube/machines/multinode-025078/id_rsa Username:docker}
	I0531 19:08:43.037157   71907 command_runner.go:130] > apiVersion: v1
	I0531 19:08:43.037211   71907 command_runner.go:130] > data:
	I0531 19:08:43.037230   71907 command_runner.go:130] >   Corefile: |
	I0531 19:08:43.037252   71907 command_runner.go:130] >     .:53 {
	I0531 19:08:43.037275   71907 command_runner.go:130] >         errors
	I0531 19:08:43.037304   71907 command_runner.go:130] >         health {
	I0531 19:08:43.037326   71907 command_runner.go:130] >            lameduck 5s
	I0531 19:08:43.037346   71907 command_runner.go:130] >         }
	I0531 19:08:43.037368   71907 command_runner.go:130] >         ready
	I0531 19:08:43.037398   71907 command_runner.go:130] >         kubernetes cluster.local in-addr.arpa ip6.arpa {
	I0531 19:08:43.037420   71907 command_runner.go:130] >            pods insecure
	I0531 19:08:43.037442   71907 command_runner.go:130] >            fallthrough in-addr.arpa ip6.arpa
	I0531 19:08:43.037477   71907 command_runner.go:130] >            ttl 30
	I0531 19:08:43.037507   71907 command_runner.go:130] >         }
	I0531 19:08:43.037526   71907 command_runner.go:130] >         prometheus :9153
	I0531 19:08:43.037552   71907 command_runner.go:130] >         forward . /etc/resolv.conf {
	I0531 19:08:43.037572   71907 command_runner.go:130] >            max_concurrent 1000
	I0531 19:08:43.037594   71907 command_runner.go:130] >         }
	I0531 19:08:43.037616   71907 command_runner.go:130] >         cache 30
	I0531 19:08:43.037641   71907 command_runner.go:130] >         loop
	I0531 19:08:43.037662   71907 command_runner.go:130] >         reload
	I0531 19:08:43.037684   71907 command_runner.go:130] >         loadbalance
	I0531 19:08:43.037712   71907 command_runner.go:130] >     }
	I0531 19:08:43.037733   71907 command_runner.go:130] > kind: ConfigMap
	I0531 19:08:43.037755   71907 command_runner.go:130] > metadata:
	I0531 19:08:43.037780   71907 command_runner.go:130] >   creationTimestamp: "2023-05-31T19:08:29Z"
	I0531 19:08:43.037808   71907 command_runner.go:130] >   name: coredns
	I0531 19:08:43.037831   71907 command_runner.go:130] >   namespace: kube-system
	I0531 19:08:43.037854   71907 command_runner.go:130] >   resourceVersion: "256"
	I0531 19:08:43.037884   71907 command_runner.go:130] >   uid: e0aa6a76-c180-4e17-9182-ad1c72f8c373
	I0531 19:08:43.041067   71907 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.58.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.27.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
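The bash pipeline above fetches the coredns ConfigMap, uses sed to splice a `hosts` block (mapping host.minikube.internal to the gateway IP 192.168.58.1) in front of the `forward . /etc/resolv.conf` stanza and a `log` directive before `errors`, then replaces the ConfigMap. The core string surgery for the hosts block, as a minimal Go sketch over a trimmed-down Corefile (illustrative only, not minikube's code):

package main

import (
	"fmt"
	"strings"
)

func main() {
	// Trimmed-down Corefile standing in for the ConfigMap dump above.
	corefile := ".:53 {\n    errors\n    forward . /etc/resolv.conf {\n       max_concurrent 1000\n    }\n    cache 30\n}\n"
	hostsBlock := "    hosts {\n       192.168.58.1 host.minikube.internal\n       fallthrough\n    }\n"
	// Splice the hosts block in immediately before the forward stanza, once.
	patched := strings.Replace(corefile, "    forward .", hostsBlock+"    forward .", 1)
	fmt.Print(patched)
}

The `configmap/coredns replaced` line further down confirms the patched Corefile was accepted by the apiserver.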
	I0531 19:08:43.137904   71907 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0531 19:08:43.173480   71907 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0531 19:08:43.367736   71907 round_trippers.go:463] GET https://192.168.58.2:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0531 19:08:43.367754   71907 round_trippers.go:469] Request Headers:
	I0531 19:08:43.367764   71907 round_trippers.go:473]     Accept: application/json, */*
	I0531 19:08:43.367771   71907 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0531 19:08:43.441660   71907 round_trippers.go:574] Response Status: 200 OK in 73 milliseconds
	I0531 19:08:43.441682   71907 round_trippers.go:577] Response Headers:
	I0531 19:08:43.441690   71907 round_trippers.go:580]     Content-Type: application/json
	I0531 19:08:43.441697   71907 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 61ef383b-79fe-44e8-8f9c-d13c8cd9f602
	I0531 19:08:43.441704   71907 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9616b646-f7d3-407a-9b78-ebb52861f5ac
	I0531 19:08:43.441710   71907 round_trippers.go:580]     Content-Length: 291
	I0531 19:08:43.441717   71907 round_trippers.go:580]     Date: Wed, 31 May 2023 19:08:43 GMT
	I0531 19:08:43.441724   71907 round_trippers.go:580]     Audit-Id: 03725731-b504-4a2d-9afe-365d5d7576fd
	I0531 19:08:43.441730   71907 round_trippers.go:580]     Cache-Control: no-cache, private
	I0531 19:08:43.446090   71907 request.go:1188] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"d8cb62c2-6f96-4520-9400-e74374977fc2","resourceVersion":"379","creationTimestamp":"2023-05-31T19:08:29Z"},"spec":{"replicas":1},"status":{"replicas":0,"selector":"k8s-app=kube-dns"}}
	I0531 19:08:43.446248   71907 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-025078" context rescaled to 1 replica
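The GET/PUT pair on `.../deployments/coredns/scale` above is how minikube trims the default two CoreDNS replicas down to one for a single-node profile. The same round trip via client-go's Scale subresource helpers, as a sketch (kubeconfig location assumed):

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx := context.TODO()
	// GET the Scale subresource (mirrors the GET .../deployments/coredns/scale above).
	scale, err := cs.AppsV1().Deployments("kube-system").GetScale(ctx, "coredns", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	// PUT it back with spec.replicas lowered to 1 (mirrors the PUT above).
	scale.Spec.Replicas = 1
	if _, err := cs.AppsV1().Deployments("kube-system").UpdateScale(ctx, "coredns", scale, metav1.UpdateOptions{}); err != nil {
		panic(err)
	}
	fmt.Println("coredns rescaled to 1 replica")
}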
	I0531 19:08:43.446298   71907 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0531 19:08:43.450894   71907 out.go:177] * Verifying Kubernetes components...
	I0531 19:08:43.452853   71907 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0531 19:08:43.766187   71907 command_runner.go:130] > configmap/coredns replaced
	I0531 19:08:43.771661   71907 start.go:916] {"host.minikube.internal": 192.168.58.1} host record injected into CoreDNS's ConfigMap
	I0531 19:08:44.072327   71907 command_runner.go:130] > serviceaccount/storage-provisioner created
	I0531 19:08:44.081243   71907 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/storage-provisioner created
	I0531 19:08:44.090819   71907 command_runner.go:130] > role.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I0531 19:08:44.099640   71907 command_runner.go:130] > rolebinding.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I0531 19:08:44.110324   71907 command_runner.go:130] > endpoints/k8s.io-minikube-hostpath created
	I0531 19:08:44.123487   71907 command_runner.go:130] > pod/storage-provisioner created
	I0531 19:08:44.128378   71907 command_runner.go:130] > storageclass.storage.k8s.io/standard created
	I0531 19:08:44.130774   71907 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0531 19:08:44.129108   71907 loader.go:373] Config loaded from file:  /home/jenkins/minikube-integration/16569-2389/kubeconfig
	I0531 19:08:44.132910   71907 kapi.go:59] client config for multinode-025078: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/16569-2389/.minikube/profiles/multinode-025078/client.crt", KeyFile:"/home/jenkins/minikube-integration/16569-2389/.minikube/profiles/multinode-025078/client.key", CAFile:"/home/jenkins/minikube-integration/16569-2389/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x13dde10), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0531 19:08:44.133245   71907 node_ready.go:35] waiting up to 6m0s for node "multinode-025078" to be "Ready" ...
	I0531 19:08:44.133337   71907 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-025078
	I0531 19:08:44.133366   71907 round_trippers.go:469] Request Headers:
	I0531 19:08:44.133390   71907 round_trippers.go:473]     Accept: application/json, */*
	I0531 19:08:44.133413   71907 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0531 19:08:44.133563   71907 addons.go:499] enable addons completed in 1.360689909s: enabled=[storage-provisioner default-storageclass]
	I0531 19:08:44.143934   71907 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0531 19:08:44.143954   71907 round_trippers.go:577] Response Headers:
	I0531 19:08:44.143963   71907 round_trippers.go:580]     Cache-Control: no-cache, private
	I0531 19:08:44.143971   71907 round_trippers.go:580]     Content-Type: application/json
	I0531 19:08:44.143977   71907 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 61ef383b-79fe-44e8-8f9c-d13c8cd9f602
	I0531 19:08:44.143984   71907 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9616b646-f7d3-407a-9b78-ebb52861f5ac
	I0531 19:08:44.143991   71907 round_trippers.go:580]     Date: Wed, 31 May 2023 19:08:44 GMT
	I0531 19:08:44.143998   71907 round_trippers.go:580]     Audit-Id: 64aeee11-2658-4ebd-bd8f-62733a0ca005
	I0531 19:08:44.144553   71907 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-025078","uid":"527e40a2-2af7-4de3-b462-a471bac33c44","resourceVersion":"336","creationTimestamp":"2023-05-31T19:08:26Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-025078","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7022875d4a054c2d518e5e5a7b9d500799d50140","minikube.k8s.io/name":"multinode-025078","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_05_31T19_08_30_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-05-31T19:08:26Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0531 19:08:44.645958   71907 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-025078
	I0531 19:08:44.645989   71907 round_trippers.go:469] Request Headers:
	I0531 19:08:44.646001   71907 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0531 19:08:44.646010   71907 round_trippers.go:473]     Accept: application/json, */*
	I0531 19:08:44.649517   71907 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0531 19:08:44.649577   71907 round_trippers.go:577] Response Headers:
	I0531 19:08:44.649586   71907 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 61ef383b-79fe-44e8-8f9c-d13c8cd9f602
	I0531 19:08:44.649593   71907 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9616b646-f7d3-407a-9b78-ebb52861f5ac
	I0531 19:08:44.649608   71907 round_trippers.go:580]     Date: Wed, 31 May 2023 19:08:44 GMT
	I0531 19:08:44.649616   71907 round_trippers.go:580]     Audit-Id: b6b58059-da7e-467a-a1bd-6124e9fe0e7b
	I0531 19:08:44.649625   71907 round_trippers.go:580]     Cache-Control: no-cache, private
	I0531 19:08:44.649633   71907 round_trippers.go:580]     Content-Type: application/json
	I0531 19:08:44.649768   71907 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-025078","uid":"527e40a2-2af7-4de3-b462-a471bac33c44","resourceVersion":"421","creationTimestamp":"2023-05-31T19:08:26Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-025078","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7022875d4a054c2d518e5e5a7b9d500799d50140","minikube.k8s.io/name":"multinode-025078","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_05_31T19_08_30_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-05-31T19:08:26Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I0531 19:08:44.650191   71907 node_ready.go:49] node "multinode-025078" has status "Ready":"True"
	I0531 19:08:44.650210   71907 node_ready.go:38] duration metric: took 516.928746ms waiting for node "multinode-025078" to be "Ready" ...
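The node_ready lines above declare the node Ready once its NodeReady condition reports "True" in the GET response. A compact client-go sketch of that condition check (node name taken from the log; kubeconfig location assumed):

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// nodeReady reports whether the named node's NodeReady condition is "True".
func nodeReady(cs *kubernetes.Clientset, name string) (bool, error) {
	node, err := cs.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, c := range node.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ready, err := nodeReady(cs, "multinode-025078")
	fmt.Println("ready:", ready, "err:", err)
}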
	I0531 19:08:44.650219   71907 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0531 19:08:44.650352   71907 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I0531 19:08:44.650384   71907 round_trippers.go:469] Request Headers:
	I0531 19:08:44.650393   71907 round_trippers.go:473]     Accept: application/json, */*
	I0531 19:08:44.650400   71907 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0531 19:08:44.655262   71907 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0531 19:08:44.655293   71907 round_trippers.go:577] Response Headers:
	I0531 19:08:44.655303   71907 round_trippers.go:580]     Audit-Id: 15574993-1063-4547-8cc4-13b2a097591e
	I0531 19:08:44.655314   71907 round_trippers.go:580]     Cache-Control: no-cache, private
	I0531 19:08:44.655321   71907 round_trippers.go:580]     Content-Type: application/json
	I0531 19:08:44.655329   71907 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 61ef383b-79fe-44e8-8f9c-d13c8cd9f602
	I0531 19:08:44.655336   71907 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9616b646-f7d3-407a-9b78-ebb52861f5ac
	I0531 19:08:44.655346   71907 round_trippers.go:580]     Date: Wed, 31 May 2023 19:08:44 GMT
	I0531 19:08:44.656510   71907 request.go:1188] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"424"},"items":[{"metadata":{"name":"coredns-5d78c9869d-hhw4h","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"3f927f21-c6c7-43a2-a635-4fd3c672172d","resourceVersion":"424","creationTimestamp":"2023-05-31T19:08:43Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"463ad278-b9f3-4d58-8542-8ec925fef61a","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-05-31T19:08:43Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"463ad278-b9f3-4d58-8542-8ec925fef61a\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 55049 chars]
	I0531 19:08:44.660639   71907 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5d78c9869d-hhw4h" in "kube-system" namespace to be "Ready" ...
	I0531 19:08:44.660729   71907 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5d78c9869d-hhw4h
	I0531 19:08:44.660744   71907 round_trippers.go:469] Request Headers:
	I0531 19:08:44.660754   71907 round_trippers.go:473]     Accept: application/json, */*
	I0531 19:08:44.660765   71907 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0531 19:08:44.663365   71907 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0531 19:08:44.663384   71907 round_trippers.go:577] Response Headers:
	I0531 19:08:44.663393   71907 round_trippers.go:580]     Cache-Control: no-cache, private
	I0531 19:08:44.663400   71907 round_trippers.go:580]     Content-Type: application/json
	I0531 19:08:44.663407   71907 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 61ef383b-79fe-44e8-8f9c-d13c8cd9f602
	I0531 19:08:44.663414   71907 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9616b646-f7d3-407a-9b78-ebb52861f5ac
	I0531 19:08:44.663421   71907 round_trippers.go:580]     Date: Wed, 31 May 2023 19:08:44 GMT
	I0531 19:08:44.663427   71907 round_trippers.go:580]     Audit-Id: e4348aa0-61e0-46b5-9588-6bbf85906cf4
	I0531 19:08:44.663568   71907 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5d78c9869d-hhw4h","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"3f927f21-c6c7-43a2-a635-4fd3c672172d","resourceVersion":"424","creationTimestamp":"2023-05-31T19:08:43Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"463ad278-b9f3-4d58-8542-8ec925fef61a","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-05-31T19:08:43Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"463ad278-b9f3-4d58-8542-8ec925fef61a\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6150 chars]
	I0531 19:08:44.664101   71907 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-025078
	I0531 19:08:44.664119   71907 round_trippers.go:469] Request Headers:
	I0531 19:08:44.664129   71907 round_trippers.go:473]     Accept: application/json, */*
	I0531 19:08:44.664141   71907 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0531 19:08:44.666535   71907 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0531 19:08:44.666554   71907 round_trippers.go:577] Response Headers:
	I0531 19:08:44.666562   71907 round_trippers.go:580]     Date: Wed, 31 May 2023 19:08:44 GMT
	I0531 19:08:44.666569   71907 round_trippers.go:580]     Audit-Id: 3e73c757-c25c-4a49-8737-7be9442d84ad
	I0531 19:08:44.666576   71907 round_trippers.go:580]     Cache-Control: no-cache, private
	I0531 19:08:44.666582   71907 round_trippers.go:580]     Content-Type: application/json
	I0531 19:08:44.666590   71907 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 61ef383b-79fe-44e8-8f9c-d13c8cd9f602
	I0531 19:08:44.666596   71907 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9616b646-f7d3-407a-9b78-ebb52861f5ac
	I0531 19:08:44.666754   71907 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-025078","uid":"527e40a2-2af7-4de3-b462-a471bac33c44","resourceVersion":"421","creationTimestamp":"2023-05-31T19:08:26Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-025078","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7022875d4a054c2d518e5e5a7b9d500799d50140","minikube.k8s.io/name":"multinode-025078","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_05_31T19_08_30_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-05-31T19:08:26Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I0531 19:08:45.168001   71907 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5d78c9869d-hhw4h
	I0531 19:08:45.168039   71907 round_trippers.go:469] Request Headers:
	I0531 19:08:45.168054   71907 round_trippers.go:473]     Accept: application/json, */*
	I0531 19:08:45.168063   71907 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0531 19:08:45.171068   71907 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0531 19:08:45.171146   71907 round_trippers.go:577] Response Headers:
	I0531 19:08:45.171169   71907 round_trippers.go:580]     Cache-Control: no-cache, private
	I0531 19:08:45.171194   71907 round_trippers.go:580]     Content-Type: application/json
	I0531 19:08:45.171229   71907 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 61ef383b-79fe-44e8-8f9c-d13c8cd9f602
	I0531 19:08:45.171246   71907 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9616b646-f7d3-407a-9b78-ebb52861f5ac
	I0531 19:08:45.171257   71907 round_trippers.go:580]     Date: Wed, 31 May 2023 19:08:45 GMT
	I0531 19:08:45.171264   71907 round_trippers.go:580]     Audit-Id: 7cbd945a-de21-4ae2-9951-5f73a0b6143a
	I0531 19:08:45.171430   71907 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5d78c9869d-hhw4h","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"3f927f21-c6c7-43a2-a635-4fd3c672172d","resourceVersion":"424","creationTimestamp":"2023-05-31T19:08:43Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"463ad278-b9f3-4d58-8542-8ec925fef61a","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-05-31T19:08:43Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"463ad278-b9f3-4d58-8542-8ec925fef61a\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6150 chars]
	I0531 19:08:45.171991   71907 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-025078
	I0531 19:08:45.172007   71907 round_trippers.go:469] Request Headers:
	I0531 19:08:45.172017   71907 round_trippers.go:473]     Accept: application/json, */*
	I0531 19:08:45.172025   71907 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0531 19:08:45.174883   71907 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0531 19:08:45.174955   71907 round_trippers.go:577] Response Headers:
	I0531 19:08:45.174986   71907 round_trippers.go:580]     Cache-Control: no-cache, private
	I0531 19:08:45.175009   71907 round_trippers.go:580]     Content-Type: application/json
	I0531 19:08:45.175041   71907 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 61ef383b-79fe-44e8-8f9c-d13c8cd9f602
	I0531 19:08:45.175069   71907 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9616b646-f7d3-407a-9b78-ebb52861f5ac
	I0531 19:08:45.175093   71907 round_trippers.go:580]     Date: Wed, 31 May 2023 19:08:45 GMT
	I0531 19:08:45.175129   71907 round_trippers.go:580]     Audit-Id: 7a6f9ccb-35f5-4097-979a-818166d31f62
	I0531 19:08:45.175308   71907 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-025078","uid":"527e40a2-2af7-4de3-b462-a471bac33c44","resourceVersion":"421","creationTimestamp":"2023-05-31T19:08:26Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-025078","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7022875d4a054c2d518e5e5a7b9d500799d50140","minikube.k8s.io/name":"multinode-025078","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_05_31T19_08_30_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-05-31T19:08:26Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I0531 19:08:45.668024   71907 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5d78c9869d-hhw4h
	I0531 19:08:45.668104   71907 round_trippers.go:469] Request Headers:
	I0531 19:08:45.668147   71907 round_trippers.go:473]     Accept: application/json, */*
	I0531 19:08:45.668196   71907 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0531 19:08:45.671430   71907 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0531 19:08:45.671455   71907 round_trippers.go:577] Response Headers:
	I0531 19:08:45.671465   71907 round_trippers.go:580]     Date: Wed, 31 May 2023 19:08:45 GMT
	I0531 19:08:45.671472   71907 round_trippers.go:580]     Audit-Id: a7fccde0-0401-4c43-b575-0b1a2b14a57b
	I0531 19:08:45.671480   71907 round_trippers.go:580]     Cache-Control: no-cache, private
	I0531 19:08:45.671504   71907 round_trippers.go:580]     Content-Type: application/json
	I0531 19:08:45.671513   71907 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 61ef383b-79fe-44e8-8f9c-d13c8cd9f602
	I0531 19:08:45.671519   71907 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9616b646-f7d3-407a-9b78-ebb52861f5ac
	I0531 19:08:45.672019   71907 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5d78c9869d-hhw4h","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"3f927f21-c6c7-43a2-a635-4fd3c672172d","resourceVersion":"424","creationTimestamp":"2023-05-31T19:08:43Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"463ad278-b9f3-4d58-8542-8ec925fef61a","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-05-31T19:08:43Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"463ad278-b9f3-4d58-8542-8ec925fef61a\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6150 chars]
	I0531 19:08:45.672916   71907 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-025078
	I0531 19:08:45.672977   71907 round_trippers.go:469] Request Headers:
	I0531 19:08:45.673013   71907 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0531 19:08:45.673064   71907 round_trippers.go:473]     Accept: application/json, */*
	I0531 19:08:45.676544   71907 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0531 19:08:45.676572   71907 round_trippers.go:577] Response Headers:
	I0531 19:08:45.676580   71907 round_trippers.go:580]     Audit-Id: d758c577-78a6-483c-aeb7-3f230b569d8a
	I0531 19:08:45.676587   71907 round_trippers.go:580]     Cache-Control: no-cache, private
	I0531 19:08:45.676594   71907 round_trippers.go:580]     Content-Type: application/json
	I0531 19:08:45.676621   71907 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 61ef383b-79fe-44e8-8f9c-d13c8cd9f602
	I0531 19:08:45.676634   71907 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9616b646-f7d3-407a-9b78-ebb52861f5ac
	I0531 19:08:45.676641   71907 round_trippers.go:580]     Date: Wed, 31 May 2023 19:08:45 GMT
	I0531 19:08:45.677064   71907 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-025078","uid":"527e40a2-2af7-4de3-b462-a471bac33c44","resourceVersion":"421","creationTimestamp":"2023-05-31T19:08:26Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-025078","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7022875d4a054c2d518e5e5a7b9d500799d50140","minikube.k8s.io/name":"multinode-025078","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_05_31T19_08_30_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-05-31T19:08:26Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I0531 19:08:46.167568   71907 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5d78c9869d-hhw4h
	I0531 19:08:46.167592   71907 round_trippers.go:469] Request Headers:
	I0531 19:08:46.167602   71907 round_trippers.go:473]     Accept: application/json, */*
	I0531 19:08:46.167609   71907 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0531 19:08:46.170220   71907 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0531 19:08:46.170251   71907 round_trippers.go:577] Response Headers:
	I0531 19:08:46.170260   71907 round_trippers.go:580]     Audit-Id: d793d6c4-51c9-4095-be60-c8eaaa6d4e4b
	I0531 19:08:46.170281   71907 round_trippers.go:580]     Cache-Control: no-cache, private
	I0531 19:08:46.170294   71907 round_trippers.go:580]     Content-Type: application/json
	I0531 19:08:46.170301   71907 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 61ef383b-79fe-44e8-8f9c-d13c8cd9f602
	I0531 19:08:46.170310   71907 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9616b646-f7d3-407a-9b78-ebb52861f5ac
	I0531 19:08:46.170334   71907 round_trippers.go:580]     Date: Wed, 31 May 2023 19:08:46 GMT
	I0531 19:08:46.170478   71907 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5d78c9869d-hhw4h","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"3f927f21-c6c7-43a2-a635-4fd3c672172d","resourceVersion":"424","creationTimestamp":"2023-05-31T19:08:43Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"463ad278-b9f3-4d58-8542-8ec925fef61a","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-05-31T19:08:43Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"463ad278-b9f3-4d58-8542-8ec925fef61a\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6150 chars]
	I0531 19:08:46.171045   71907 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-025078
	I0531 19:08:46.171063   71907 round_trippers.go:469] Request Headers:
	I0531 19:08:46.171072   71907 round_trippers.go:473]     Accept: application/json, */*
	I0531 19:08:46.171080   71907 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0531 19:08:46.173527   71907 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0531 19:08:46.173559   71907 round_trippers.go:577] Response Headers:
	I0531 19:08:46.173568   71907 round_trippers.go:580]     Date: Wed, 31 May 2023 19:08:46 GMT
	I0531 19:08:46.173575   71907 round_trippers.go:580]     Audit-Id: a39c2b22-277e-4e16-a033-3c8f30658417
	I0531 19:08:46.173598   71907 round_trippers.go:580]     Cache-Control: no-cache, private
	I0531 19:08:46.173610   71907 round_trippers.go:580]     Content-Type: application/json
	I0531 19:08:46.173617   71907 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 61ef383b-79fe-44e8-8f9c-d13c8cd9f602
	I0531 19:08:46.173624   71907 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9616b646-f7d3-407a-9b78-ebb52861f5ac
	I0531 19:08:46.173796   71907 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-025078","uid":"527e40a2-2af7-4de3-b462-a471bac33c44","resourceVersion":"421","creationTimestamp":"2023-05-31T19:08:26Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-025078","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7022875d4a054c2d518e5e5a7b9d500799d50140","minikube.k8s.io/name":"multinode-025078","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_05_31T19_08_30_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-05-31T19:08:26Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I0531 19:08:46.667951   71907 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5d78c9869d-hhw4h
	I0531 19:08:46.667975   71907 round_trippers.go:469] Request Headers:
	I0531 19:08:46.667985   71907 round_trippers.go:473]     Accept: application/json, */*
	I0531 19:08:46.667993   71907 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0531 19:08:46.670582   71907 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0531 19:08:46.670620   71907 round_trippers.go:577] Response Headers:
	I0531 19:08:46.670632   71907 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 61ef383b-79fe-44e8-8f9c-d13c8cd9f602
	I0531 19:08:46.670652   71907 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9616b646-f7d3-407a-9b78-ebb52861f5ac
	I0531 19:08:46.670660   71907 round_trippers.go:580]     Date: Wed, 31 May 2023 19:08:46 GMT
	I0531 19:08:46.670668   71907 round_trippers.go:580]     Audit-Id: b726d7e4-6562-44f7-bd86-c6c2a2d19449
	I0531 19:08:46.670675   71907 round_trippers.go:580]     Cache-Control: no-cache, private
	I0531 19:08:46.670689   71907 round_trippers.go:580]     Content-Type: application/json
	I0531 19:08:46.670862   71907 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5d78c9869d-hhw4h","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"3f927f21-c6c7-43a2-a635-4fd3c672172d","resourceVersion":"424","creationTimestamp":"2023-05-31T19:08:43Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"463ad278-b9f3-4d58-8542-8ec925fef61a","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-05-31T19:08:43Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"463ad278-b9f3-4d58-8542-8ec925fef61a\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6150 chars]
	I0531 19:08:46.671435   71907 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-025078
	I0531 19:08:46.671452   71907 round_trippers.go:469] Request Headers:
	I0531 19:08:46.671461   71907 round_trippers.go:473]     Accept: application/json, */*
	I0531 19:08:46.671469   71907 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0531 19:08:46.673891   71907 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0531 19:08:46.673910   71907 round_trippers.go:577] Response Headers:
	I0531 19:08:46.673918   71907 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 61ef383b-79fe-44e8-8f9c-d13c8cd9f602
	I0531 19:08:46.673925   71907 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9616b646-f7d3-407a-9b78-ebb52861f5ac
	I0531 19:08:46.673934   71907 round_trippers.go:580]     Date: Wed, 31 May 2023 19:08:46 GMT
	I0531 19:08:46.673942   71907 round_trippers.go:580]     Audit-Id: cc2ecee8-151c-4804-a91d-ac9cf3912010
	I0531 19:08:46.673952   71907 round_trippers.go:580]     Cache-Control: no-cache, private
	I0531 19:08:46.673965   71907 round_trippers.go:580]     Content-Type: application/json
	I0531 19:08:46.674115   71907 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-025078","uid":"527e40a2-2af7-4de3-b462-a471bac33c44","resourceVersion":"421","creationTimestamp":"2023-05-31T19:08:26Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-025078","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7022875d4a054c2d518e5e5a7b9d500799d50140","minikube.k8s.io/name":"multinode-025078","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_05_31T19_08_30_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-05-31T19:08:26Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I0531 19:08:46.674517   71907 pod_ready.go:102] pod "coredns-5d78c9869d-hhw4h" in "kube-system" namespace has status "Ready":"False"
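(Editor's note: a pod_ready.go:102 line like the one above means the pod's Ready condition is still False. A minimal sketch of what such a check has to do, using client-go types; the helper name isPodReady is illustrative, not minikube's own.)

    package readiness

    import (
    	corev1 "k8s.io/api/core/v1"
    )

    // isPodReady reports whether the pod's Ready condition is True.
    // A pod with no Ready condition yet (e.g. still Pending) counts
    // as not ready.
    func isPodReady(pod *corev1.Pod) bool {
    	for _, cond := range pod.Status.Conditions {
    		if cond.Type == corev1.PodReady {
    			return cond.Status == corev1.ConditionTrue
    		}
    	}
    	return false
    }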
	I0531 19:08:47.167829   71907 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5d78c9869d-hhw4h
	I0531 19:08:47.167851   71907 round_trippers.go:469] Request Headers:
	I0531 19:08:47.167861   71907 round_trippers.go:473]     Accept: application/json, */*
	I0531 19:08:47.167868   71907 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0531 19:08:47.170482   71907 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0531 19:08:47.170551   71907 round_trippers.go:577] Response Headers:
	I0531 19:08:47.170573   71907 round_trippers.go:580]     Audit-Id: 06069f9c-aa7a-4a20-b53e-2a4daf4c90df
	I0531 19:08:47.170599   71907 round_trippers.go:580]     Cache-Control: no-cache, private
	I0531 19:08:47.170660   71907 round_trippers.go:580]     Content-Type: application/json
	I0531 19:08:47.170672   71907 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 61ef383b-79fe-44e8-8f9c-d13c8cd9f602
	I0531 19:08:47.170679   71907 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9616b646-f7d3-407a-9b78-ebb52861f5ac
	I0531 19:08:47.170686   71907 round_trippers.go:580]     Date: Wed, 31 May 2023 19:08:47 GMT
	I0531 19:08:47.170856   71907 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5d78c9869d-hhw4h","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"3f927f21-c6c7-43a2-a635-4fd3c672172d","resourceVersion":"424","creationTimestamp":"2023-05-31T19:08:43Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"463ad278-b9f3-4d58-8542-8ec925fef61a","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-05-31T19:08:43Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"463ad278-b9f3-4d58-8542-8ec925fef61a\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6150 chars]
	I0531 19:08:47.171369   71907 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-025078
	I0531 19:08:47.171384   71907 round_trippers.go:469] Request Headers:
	I0531 19:08:47.171393   71907 round_trippers.go:473]     Accept: application/json, */*
	I0531 19:08:47.171400   71907 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0531 19:08:47.173861   71907 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0531 19:08:47.173884   71907 round_trippers.go:577] Response Headers:
	I0531 19:08:47.173893   71907 round_trippers.go:580]     Audit-Id: d25e4179-0cbd-4d27-94a1-9cbeba8dbef9
	I0531 19:08:47.173900   71907 round_trippers.go:580]     Cache-Control: no-cache, private
	I0531 19:08:47.173907   71907 round_trippers.go:580]     Content-Type: application/json
	I0531 19:08:47.173914   71907 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 61ef383b-79fe-44e8-8f9c-d13c8cd9f602
	I0531 19:08:47.173924   71907 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9616b646-f7d3-407a-9b78-ebb52861f5ac
	I0531 19:08:47.173936   71907 round_trippers.go:580]     Date: Wed, 31 May 2023 19:08:47 GMT
	I0531 19:08:47.174070   71907 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-025078","uid":"527e40a2-2af7-4de3-b462-a471bac33c44","resourceVersion":"421","creationTimestamp":"2023-05-31T19:08:26Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-025078","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7022875d4a054c2d518e5e5a7b9d500799d50140","minikube.k8s.io/name":"multinode-025078","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_05_31T19_08_30_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-05-31T19:08:26Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I0531 19:08:47.667952   71907 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5d78c9869d-hhw4h
	I0531 19:08:47.667976   71907 round_trippers.go:469] Request Headers:
	I0531 19:08:47.667987   71907 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0531 19:08:47.667995   71907 round_trippers.go:473]     Accept: application/json, */*
	I0531 19:08:47.679103   71907 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0531 19:08:47.679132   71907 round_trippers.go:577] Response Headers:
	I0531 19:08:47.679142   71907 round_trippers.go:580]     Content-Type: application/json
	I0531 19:08:47.679149   71907 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 61ef383b-79fe-44e8-8f9c-d13c8cd9f602
	I0531 19:08:47.679156   71907 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9616b646-f7d3-407a-9b78-ebb52861f5ac
	I0531 19:08:47.679166   71907 round_trippers.go:580]     Date: Wed, 31 May 2023 19:08:47 GMT
	I0531 19:08:47.679176   71907 round_trippers.go:580]     Audit-Id: c9963196-77f7-4c8f-aae2-1a0faa178434
	I0531 19:08:47.679183   71907 round_trippers.go:580]     Cache-Control: no-cache, private
	I0531 19:08:47.679278   71907 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5d78c9869d-hhw4h","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"3f927f21-c6c7-43a2-a635-4fd3c672172d","resourceVersion":"442","creationTimestamp":"2023-05-31T19:08:43Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"463ad278-b9f3-4d58-8542-8ec925fef61a","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-05-31T19:08:43Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"463ad278-b9f3-4d58-8542-8ec925fef61a\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6263 chars]
	I0531 19:08:47.679818   71907 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-025078
	I0531 19:08:47.679834   71907 round_trippers.go:469] Request Headers:
	I0531 19:08:47.679843   71907 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0531 19:08:47.679853   71907 round_trippers.go:473]     Accept: application/json, */*
	I0531 19:08:47.682464   71907 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0531 19:08:47.682484   71907 round_trippers.go:577] Response Headers:
	I0531 19:08:47.682493   71907 round_trippers.go:580]     Cache-Control: no-cache, private
	I0531 19:08:47.682500   71907 round_trippers.go:580]     Content-Type: application/json
	I0531 19:08:47.682507   71907 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 61ef383b-79fe-44e8-8f9c-d13c8cd9f602
	I0531 19:08:47.682514   71907 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9616b646-f7d3-407a-9b78-ebb52861f5ac
	I0531 19:08:47.682521   71907 round_trippers.go:580]     Date: Wed, 31 May 2023 19:08:47 GMT
	I0531 19:08:47.682527   71907 round_trippers.go:580]     Audit-Id: 6e68aa76-1a5c-48ff-a81b-afc360d1b2ea
	I0531 19:08:47.682699   71907 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-025078","uid":"527e40a2-2af7-4de3-b462-a471bac33c44","resourceVersion":"421","creationTimestamp":"2023-05-31T19:08:26Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-025078","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7022875d4a054c2d518e5e5a7b9d500799d50140","minikube.k8s.io/name":"multinode-025078","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_05_31T19_08_30_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-05-31T19:08:26Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I0531 19:08:47.683135   71907 pod_ready.go:92] pod "coredns-5d78c9869d-hhw4h" in "kube-system" namespace has status "Ready":"True"
	I0531 19:08:47.683154   71907 pod_ready.go:81] duration metric: took 3.022483221s waiting for pod "coredns-5d78c9869d-hhw4h" in "kube-system" namespace to be "Ready" ...
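(Editor's note: the roughly 500ms spacing of the GETs above, together with the "waiting up to 6m0s" and "duration metric" lines, suggests a poll-until-ready loop of the following shape. A sketch under those assumptions; the interval, the error handling, and the waitPodReady name are inferred, not taken from minikube's source.)

    package readiness

    import (
    	"context"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/apimachinery/pkg/util/wait"
    	"k8s.io/client-go/kubernetes"
    )

    // waitPodReady polls the API server until the named pod reports the
    // Ready condition as True, or the timeout elapses.
    func waitPodReady(ctx context.Context, cs kubernetes.Interface, ns, name string) error {
    	return wait.PollImmediate(500*time.Millisecond, 6*time.Minute, func() (bool, error) {
    		pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
    		if err != nil {
    			return false, nil // treat errors as transient and keep polling
    		}
    		for _, cond := range pod.Status.Conditions {
    			if cond.Type == corev1.PodReady && cond.Status == corev1.ConditionTrue {
    				return true, nil
    			}
    		}
    		return false, nil
    	})
    }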
	I0531 19:08:47.683165   71907 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-025078" in "kube-system" namespace to be "Ready" ...
	I0531 19:08:47.683230   71907 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-025078
	I0531 19:08:47.683239   71907 round_trippers.go:469] Request Headers:
	I0531 19:08:47.683248   71907 round_trippers.go:473]     Accept: application/json, */*
	I0531 19:08:47.683255   71907 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0531 19:08:47.685866   71907 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0531 19:08:47.685921   71907 round_trippers.go:577] Response Headers:
	I0531 19:08:47.685943   71907 round_trippers.go:580]     Audit-Id: 15719030-418b-4172-b7ec-2215b843b756
	I0531 19:08:47.685963   71907 round_trippers.go:580]     Cache-Control: no-cache, private
	I0531 19:08:47.685998   71907 round_trippers.go:580]     Content-Type: application/json
	I0531 19:08:47.686023   71907 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 61ef383b-79fe-44e8-8f9c-d13c8cd9f602
	I0531 19:08:47.686045   71907 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9616b646-f7d3-407a-9b78-ebb52861f5ac
	I0531 19:08:47.686062   71907 round_trippers.go:580]     Date: Wed, 31 May 2023 19:08:47 GMT
	I0531 19:08:47.686218   71907 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-025078","namespace":"kube-system","uid":"ae9f84a1-9fff-46b4-b27d-4459cba13a8b","resourceVersion":"334","creationTimestamp":"2023-05-31T19:08:29Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"53d7bfe459fc0f27c783040121a9fd6b","kubernetes.io/config.mirror":"53d7bfe459fc0f27c783040121a9fd6b","kubernetes.io/config.seen":"2023-05-31T19:08:29.235417191Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-025078","uid":"527e40a2-2af7-4de3-b462-a471bac33c44","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-05-31T19:08:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-cl
ient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 6058 chars]
	I0531 19:08:47.686716   71907 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-025078
	I0531 19:08:47.686750   71907 round_trippers.go:469] Request Headers:
	I0531 19:08:47.686760   71907 round_trippers.go:473]     Accept: application/json, */*
	I0531 19:08:47.686768   71907 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0531 19:08:47.689232   71907 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0531 19:08:47.689286   71907 round_trippers.go:577] Response Headers:
	I0531 19:08:47.689308   71907 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 61ef383b-79fe-44e8-8f9c-d13c8cd9f602
	I0531 19:08:47.689328   71907 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9616b646-f7d3-407a-9b78-ebb52861f5ac
	I0531 19:08:47.689365   71907 round_trippers.go:580]     Date: Wed, 31 May 2023 19:08:47 GMT
	I0531 19:08:47.689386   71907 round_trippers.go:580]     Audit-Id: 60688724-a91a-41c5-be3e-b8d2c1fdda34
	I0531 19:08:47.689405   71907 round_trippers.go:580]     Cache-Control: no-cache, private
	I0531 19:08:47.689412   71907 round_trippers.go:580]     Content-Type: application/json
	I0531 19:08:47.689567   71907 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-025078","uid":"527e40a2-2af7-4de3-b462-a471bac33c44","resourceVersion":"421","creationTimestamp":"2023-05-31T19:08:26Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-025078","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7022875d4a054c2d518e5e5a7b9d500799d50140","minikube.k8s.io/name":"multinode-025078","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_05_31T19_08_30_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-05-31T19:08:26Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I0531 19:08:48.190172   71907 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-025078
	I0531 19:08:48.190197   71907 round_trippers.go:469] Request Headers:
	I0531 19:08:48.190207   71907 round_trippers.go:473]     Accept: application/json, */*
	I0531 19:08:48.190215   71907 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0531 19:08:48.192850   71907 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0531 19:08:48.192871   71907 round_trippers.go:577] Response Headers:
	I0531 19:08:48.192880   71907 round_trippers.go:580]     Audit-Id: d65112d5-ea8c-4d76-af1a-0b46bebff723
	I0531 19:08:48.192887   71907 round_trippers.go:580]     Cache-Control: no-cache, private
	I0531 19:08:48.192893   71907 round_trippers.go:580]     Content-Type: application/json
	I0531 19:08:48.192900   71907 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 61ef383b-79fe-44e8-8f9c-d13c8cd9f602
	I0531 19:08:48.192907   71907 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9616b646-f7d3-407a-9b78-ebb52861f5ac
	I0531 19:08:48.192916   71907 round_trippers.go:580]     Date: Wed, 31 May 2023 19:08:48 GMT
	I0531 19:08:48.193091   71907 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-025078","namespace":"kube-system","uid":"ae9f84a1-9fff-46b4-b27d-4459cba13a8b","resourceVersion":"334","creationTimestamp":"2023-05-31T19:08:29Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"53d7bfe459fc0f27c783040121a9fd6b","kubernetes.io/config.mirror":"53d7bfe459fc0f27c783040121a9fd6b","kubernetes.io/config.seen":"2023-05-31T19:08:29.235417191Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-025078","uid":"527e40a2-2af7-4de3-b462-a471bac33c44","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-05-31T19:08:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-cl
ient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 6058 chars]
	I0531 19:08:48.193601   71907 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-025078
	I0531 19:08:48.193615   71907 round_trippers.go:469] Request Headers:
	I0531 19:08:48.193624   71907 round_trippers.go:473]     Accept: application/json, */*
	I0531 19:08:48.193631   71907 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0531 19:08:48.196023   71907 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0531 19:08:48.196058   71907 round_trippers.go:577] Response Headers:
	I0531 19:08:48.196068   71907 round_trippers.go:580]     Cache-Control: no-cache, private
	I0531 19:08:48.196075   71907 round_trippers.go:580]     Content-Type: application/json
	I0531 19:08:48.196081   71907 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 61ef383b-79fe-44e8-8f9c-d13c8cd9f602
	I0531 19:08:48.196088   71907 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9616b646-f7d3-407a-9b78-ebb52861f5ac
	I0531 19:08:48.196097   71907 round_trippers.go:580]     Date: Wed, 31 May 2023 19:08:48 GMT
	I0531 19:08:48.196112   71907 round_trippers.go:580]     Audit-Id: 50a7b8d8-60ce-477b-b054-982300cd728d
	I0531 19:08:48.196266   71907 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-025078","uid":"527e40a2-2af7-4de3-b462-a471bac33c44","resourceVersion":"421","creationTimestamp":"2023-05-31T19:08:26Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-025078","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7022875d4a054c2d518e5e5a7b9d500799d50140","minikube.k8s.io/name":"multinode-025078","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_05_31T19_08_30_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-05-31T19:08:26Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I0531 19:08:48.690273   71907 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-025078
	I0531 19:08:48.690297   71907 round_trippers.go:469] Request Headers:
	I0531 19:08:48.690307   71907 round_trippers.go:473]     Accept: application/json, */*
	I0531 19:08:48.690314   71907 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0531 19:08:48.692908   71907 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0531 19:08:48.692972   71907 round_trippers.go:577] Response Headers:
	I0531 19:08:48.692995   71907 round_trippers.go:580]     Audit-Id: b848db61-5359-4085-be54-0710c6efeda1
	I0531 19:08:48.693019   71907 round_trippers.go:580]     Cache-Control: no-cache, private
	I0531 19:08:48.693053   71907 round_trippers.go:580]     Content-Type: application/json
	I0531 19:08:48.693081   71907 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 61ef383b-79fe-44e8-8f9c-d13c8cd9f602
	I0531 19:08:48.693106   71907 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9616b646-f7d3-407a-9b78-ebb52861f5ac
	I0531 19:08:48.693133   71907 round_trippers.go:580]     Date: Wed, 31 May 2023 19:08:48 GMT
	I0531 19:08:48.693268   71907 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-025078","namespace":"kube-system","uid":"ae9f84a1-9fff-46b4-b27d-4459cba13a8b","resourceVersion":"334","creationTimestamp":"2023-05-31T19:08:29Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"53d7bfe459fc0f27c783040121a9fd6b","kubernetes.io/config.mirror":"53d7bfe459fc0f27c783040121a9fd6b","kubernetes.io/config.seen":"2023-05-31T19:08:29.235417191Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-025078","uid":"527e40a2-2af7-4de3-b462-a471bac33c44","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-05-31T19:08:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-cl
ient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 6058 chars]
	I0531 19:08:48.693779   71907 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-025078
	I0531 19:08:48.693797   71907 round_trippers.go:469] Request Headers:
	I0531 19:08:48.693806   71907 round_trippers.go:473]     Accept: application/json, */*
	I0531 19:08:48.693814   71907 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0531 19:08:48.696381   71907 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0531 19:08:48.696402   71907 round_trippers.go:577] Response Headers:
	I0531 19:08:48.696411   71907 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 61ef383b-79fe-44e8-8f9c-d13c8cd9f602
	I0531 19:08:48.696418   71907 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9616b646-f7d3-407a-9b78-ebb52861f5ac
	I0531 19:08:48.696424   71907 round_trippers.go:580]     Date: Wed, 31 May 2023 19:08:48 GMT
	I0531 19:08:48.696431   71907 round_trippers.go:580]     Audit-Id: 8cc7805b-f7b9-4ac9-b36f-9a5005717b9a
	I0531 19:08:48.696438   71907 round_trippers.go:580]     Cache-Control: no-cache, private
	I0531 19:08:48.696447   71907 round_trippers.go:580]     Content-Type: application/json
	I0531 19:08:48.696563   71907 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-025078","uid":"527e40a2-2af7-4de3-b462-a471bac33c44","resourceVersion":"421","creationTimestamp":"2023-05-31T19:08:26Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-025078","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7022875d4a054c2d518e5e5a7b9d500799d50140","minikube.k8s.io/name":"multinode-025078","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_05_31T19_08_30_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-05-31T19:08:26Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I0531 19:08:49.190170   71907 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-025078
	I0531 19:08:49.190190   71907 round_trippers.go:469] Request Headers:
	I0531 19:08:49.190200   71907 round_trippers.go:473]     Accept: application/json, */*
	I0531 19:08:49.190208   71907 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0531 19:08:49.192924   71907 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0531 19:08:49.192992   71907 round_trippers.go:577] Response Headers:
	I0531 19:08:49.193015   71907 round_trippers.go:580]     Date: Wed, 31 May 2023 19:08:49 GMT
	I0531 19:08:49.193041   71907 round_trippers.go:580]     Audit-Id: dcaa4287-bc67-44e2-9169-5ecaf3877553
	I0531 19:08:49.193074   71907 round_trippers.go:580]     Cache-Control: no-cache, private
	I0531 19:08:49.193095   71907 round_trippers.go:580]     Content-Type: application/json
	I0531 19:08:49.193118   71907 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 61ef383b-79fe-44e8-8f9c-d13c8cd9f602
	I0531 19:08:49.193143   71907 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9616b646-f7d3-407a-9b78-ebb52861f5ac
	I0531 19:08:49.193266   71907 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-025078","namespace":"kube-system","uid":"ae9f84a1-9fff-46b4-b27d-4459cba13a8b","resourceVersion":"334","creationTimestamp":"2023-05-31T19:08:29Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"53d7bfe459fc0f27c783040121a9fd6b","kubernetes.io/config.mirror":"53d7bfe459fc0f27c783040121a9fd6b","kubernetes.io/config.seen":"2023-05-31T19:08:29.235417191Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-025078","uid":"527e40a2-2af7-4de3-b462-a471bac33c44","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-05-31T19:08:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-cl
ient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 6058 chars]
	I0531 19:08:49.193761   71907 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-025078
	I0531 19:08:49.193777   71907 round_trippers.go:469] Request Headers:
	I0531 19:08:49.193785   71907 round_trippers.go:473]     Accept: application/json, */*
	I0531 19:08:49.193792   71907 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0531 19:08:49.196267   71907 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0531 19:08:49.196290   71907 round_trippers.go:577] Response Headers:
	I0531 19:08:49.196299   71907 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9616b646-f7d3-407a-9b78-ebb52861f5ac
	I0531 19:08:49.196307   71907 round_trippers.go:580]     Date: Wed, 31 May 2023 19:08:49 GMT
	I0531 19:08:49.196314   71907 round_trippers.go:580]     Audit-Id: f103b072-4a94-4147-90c9-42f6a59b36ed
	I0531 19:08:49.196321   71907 round_trippers.go:580]     Cache-Control: no-cache, private
	I0531 19:08:49.196332   71907 round_trippers.go:580]     Content-Type: application/json
	I0531 19:08:49.196344   71907 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 61ef383b-79fe-44e8-8f9c-d13c8cd9f602
	I0531 19:08:49.196501   71907 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-025078","uid":"527e40a2-2af7-4de3-b462-a471bac33c44","resourceVersion":"421","creationTimestamp":"2023-05-31T19:08:26Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-025078","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7022875d4a054c2d518e5e5a7b9d500799d50140","minikube.k8s.io/name":"multinode-025078","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_05_31T19_08_30_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-05-31T19:08:26Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I0531 19:08:49.690149   71907 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-025078
	I0531 19:08:49.690173   71907 round_trippers.go:469] Request Headers:
	I0531 19:08:49.690183   71907 round_trippers.go:473]     Accept: application/json, */*
	I0531 19:08:49.690192   71907 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0531 19:08:49.692811   71907 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0531 19:08:49.692877   71907 round_trippers.go:577] Response Headers:
	I0531 19:08:49.692901   71907 round_trippers.go:580]     Date: Wed, 31 May 2023 19:08:49 GMT
	I0531 19:08:49.692916   71907 round_trippers.go:580]     Audit-Id: 97de1d33-0f29-4c72-89c3-8f77e3ed4cab
	I0531 19:08:49.692924   71907 round_trippers.go:580]     Cache-Control: no-cache, private
	I0531 19:08:49.692931   71907 round_trippers.go:580]     Content-Type: application/json
	I0531 19:08:49.692951   71907 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 61ef383b-79fe-44e8-8f9c-d13c8cd9f602
	I0531 19:08:49.692963   71907 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9616b646-f7d3-407a-9b78-ebb52861f5ac
	I0531 19:08:49.699626   71907 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-025078","namespace":"kube-system","uid":"ae9f84a1-9fff-46b4-b27d-4459cba13a8b","resourceVersion":"450","creationTimestamp":"2023-05-31T19:08:29Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"53d7bfe459fc0f27c783040121a9fd6b","kubernetes.io/config.mirror":"53d7bfe459fc0f27c783040121a9fd6b","kubernetes.io/config.seen":"2023-05-31T19:08:29.235417191Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-025078","uid":"527e40a2-2af7-4de3-b462-a471bac33c44","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-05-31T19:08:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-cl
ient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 5833 chars]
	I0531 19:08:49.700226   71907 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-025078
	I0531 19:08:49.700242   71907 round_trippers.go:469] Request Headers:
	I0531 19:08:49.700252   71907 round_trippers.go:473]     Accept: application/json, */*
	I0531 19:08:49.700259   71907 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0531 19:08:49.702964   71907 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0531 19:08:49.702987   71907 round_trippers.go:577] Response Headers:
	I0531 19:08:49.702996   71907 round_trippers.go:580]     Cache-Control: no-cache, private
	I0531 19:08:49.703003   71907 round_trippers.go:580]     Content-Type: application/json
	I0531 19:08:49.703009   71907 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 61ef383b-79fe-44e8-8f9c-d13c8cd9f602
	I0531 19:08:49.703016   71907 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9616b646-f7d3-407a-9b78-ebb52861f5ac
	I0531 19:08:49.703023   71907 round_trippers.go:580]     Date: Wed, 31 May 2023 19:08:49 GMT
	I0531 19:08:49.703030   71907 round_trippers.go:580]     Audit-Id: 17307d85-4457-4171-ac20-6982c394569e
	I0531 19:08:49.703230   71907 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-025078","uid":"527e40a2-2af7-4de3-b462-a471bac33c44","resourceVersion":"421","creationTimestamp":"2023-05-31T19:08:26Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-025078","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7022875d4a054c2d518e5e5a7b9d500799d50140","minikube.k8s.io/name":"multinode-025078","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_05_31T19_08_30_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-05-31T19:08:26Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I0531 19:08:49.703619   71907 pod_ready.go:92] pod "etcd-multinode-025078" in "kube-system" namespace has status "Ready":"True"
	I0531 19:08:49.703631   71907 pod_ready.go:81] duration metric: took 2.02045807s waiting for pod "etcd-multinode-025078" in "kube-system" namespace to be "Ready" ...
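(Editor's note: every poll above is wrapped in round_trippers.go logging — the verb and URL before the call, then the status, latency, and response headers after it. A sketch of an http.RoundTripper that would produce output of that shape; the debugTransport type is illustrative, not the client-go implementation.)

    package readiness

    import (
    	"log"
    	"net/http"
    	"time"
    )

    // debugTransport logs each request line, then the response status,
    // latency, and headers, roughly matching the round_trippers.go format.
    type debugTransport struct {
    	inner http.RoundTripper
    }

    func (t *debugTransport) RoundTrip(req *http.Request) (*http.Response, error) {
    	log.Printf("%s %s", req.Method, req.URL)
    	start := time.Now()
    	resp, err := t.inner.RoundTrip(req)
    	if err != nil {
    		return nil, err
    	}
    	log.Printf("Response Status: %s in %d milliseconds", resp.Status, time.Since(start).Milliseconds())
    	for name, values := range resp.Header {
    		log.Printf("    %s: %v", name, values)
    	}
    	return resp, nil
    }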
	I0531 19:08:49.703646   71907 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-025078" in "kube-system" namespace to be "Ready" ...
	I0531 19:08:49.703702   71907 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-025078
	I0531 19:08:49.703707   71907 round_trippers.go:469] Request Headers:
	I0531 19:08:49.703715   71907 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0531 19:08:49.703722   71907 round_trippers.go:473]     Accept: application/json, */*
	I0531 19:08:49.706121   71907 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0531 19:08:49.706138   71907 round_trippers.go:577] Response Headers:
	I0531 19:08:49.706146   71907 round_trippers.go:580]     Audit-Id: 9496aafa-a65c-44fa-9a0d-4b779db1b3d9
	I0531 19:08:49.706198   71907 round_trippers.go:580]     Cache-Control: no-cache, private
	I0531 19:08:49.706210   71907 round_trippers.go:580]     Content-Type: application/json
	I0531 19:08:49.706216   71907 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 61ef383b-79fe-44e8-8f9c-d13c8cd9f602
	I0531 19:08:49.706224   71907 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9616b646-f7d3-407a-9b78-ebb52861f5ac
	I0531 19:08:49.706230   71907 round_trippers.go:580]     Date: Wed, 31 May 2023 19:08:49 GMT
	I0531 19:08:49.706407   71907 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-025078","namespace":"kube-system","uid":"c0951b5b-7914-475e-93d5-2b8513832b1e","resourceVersion":"451","creationTimestamp":"2023-05-31T19:08:27Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.58.2:8443","kubernetes.io/config.hash":"030748278c563db19133b8fcd1436188","kubernetes.io/config.mirror":"030748278c563db19133b8fcd1436188","kubernetes.io/config.seen":"2023-05-31T19:08:20.983574828Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-025078","uid":"527e40a2-2af7-4de3-b462-a471bac33c44","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-05-31T19:08:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kube
rnetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes.i [truncated 8219 chars]
	I0531 19:08:49.706979   71907 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-025078
	I0531 19:08:49.706996   71907 round_trippers.go:469] Request Headers:
	I0531 19:08:49.707004   71907 round_trippers.go:473]     Accept: application/json, */*
	I0531 19:08:49.707011   71907 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0531 19:08:49.709283   71907 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0531 19:08:49.709327   71907 round_trippers.go:577] Response Headers:
	I0531 19:08:49.709363   71907 round_trippers.go:580]     Cache-Control: no-cache, private
	I0531 19:08:49.709391   71907 round_trippers.go:580]     Content-Type: application/json
	I0531 19:08:49.709411   71907 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 61ef383b-79fe-44e8-8f9c-d13c8cd9f602
	I0531 19:08:49.709446   71907 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9616b646-f7d3-407a-9b78-ebb52861f5ac
	I0531 19:08:49.709471   71907 round_trippers.go:580]     Date: Wed, 31 May 2023 19:08:49 GMT
	I0531 19:08:49.709495   71907 round_trippers.go:580]     Audit-Id: 0dcff9f7-03d2-420e-ae76-b56283225b85
	I0531 19:08:49.709644   71907 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-025078","uid":"527e40a2-2af7-4de3-b462-a471bac33c44","resourceVersion":"421","creationTimestamp":"2023-05-31T19:08:26Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-025078","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7022875d4a054c2d518e5e5a7b9d500799d50140","minikube.k8s.io/name":"multinode-025078","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_05_31T19_08_30_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-05-31T19:08:26Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I0531 19:08:49.710069   71907 pod_ready.go:92] pod "kube-apiserver-multinode-025078" in "kube-system" namespace has status "Ready":"True"
	I0531 19:08:49.710086   71907 pod_ready.go:81] duration metric: took 6.433248ms waiting for pod "kube-apiserver-multinode-025078" in "kube-system" namespace to be "Ready" ...
	I0531 19:08:49.710097   71907 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-025078" in "kube-system" namespace to be "Ready" ...
	I0531 19:08:49.710154   71907 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-025078
	I0531 19:08:49.710164   71907 round_trippers.go:469] Request Headers:
	I0531 19:08:49.710172   71907 round_trippers.go:473]     Accept: application/json, */*
	I0531 19:08:49.710179   71907 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0531 19:08:49.712509   71907 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0531 19:08:49.712524   71907 round_trippers.go:577] Response Headers:
	I0531 19:08:49.712532   71907 round_trippers.go:580]     Cache-Control: no-cache, private
	I0531 19:08:49.712539   71907 round_trippers.go:580]     Content-Type: application/json
	I0531 19:08:49.712545   71907 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 61ef383b-79fe-44e8-8f9c-d13c8cd9f602
	I0531 19:08:49.712552   71907 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9616b646-f7d3-407a-9b78-ebb52861f5ac
	I0531 19:08:49.712559   71907 round_trippers.go:580]     Date: Wed, 31 May 2023 19:08:49 GMT
	I0531 19:08:49.712565   71907 round_trippers.go:580]     Audit-Id: 4f7770d8-9c82-46e0-9473-080e7ee554cf
	I0531 19:08:49.712714   71907 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-025078","namespace":"kube-system","uid":"5c741586-53ad-4e55-9aba-d0f8355f2eec","resourceVersion":"452","creationTimestamp":"2023-05-31T19:08:27Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"0260a712665b8daacf232950e34a5748","kubernetes.io/config.mirror":"0260a712665b8daacf232950e34a5748","kubernetes.io/config.seen":"2023-05-31T19:08:20.983576034Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-025078","uid":"527e40a2-2af7-4de3-b462-a471bac33c44","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-05-31T19:08:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7794 chars]
	I0531 19:08:49.713214   71907 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-025078
	I0531 19:08:49.713228   71907 round_trippers.go:469] Request Headers:
	I0531 19:08:49.713237   71907 round_trippers.go:473]     Accept: application/json, */*
	I0531 19:08:49.713244   71907 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0531 19:08:49.715508   71907 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0531 19:08:49.715528   71907 round_trippers.go:577] Response Headers:
	I0531 19:08:49.715537   71907 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9616b646-f7d3-407a-9b78-ebb52861f5ac
	I0531 19:08:49.715544   71907 round_trippers.go:580]     Date: Wed, 31 May 2023 19:08:49 GMT
	I0531 19:08:49.715551   71907 round_trippers.go:580]     Audit-Id: fa18cc81-2ee4-4447-924b-f6a10a8df301
	I0531 19:08:49.715558   71907 round_trippers.go:580]     Cache-Control: no-cache, private
	I0531 19:08:49.715568   71907 round_trippers.go:580]     Content-Type: application/json
	I0531 19:08:49.715575   71907 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 61ef383b-79fe-44e8-8f9c-d13c8cd9f602
	I0531 19:08:49.715827   71907 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-025078","uid":"527e40a2-2af7-4de3-b462-a471bac33c44","resourceVersion":"421","creationTimestamp":"2023-05-31T19:08:26Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-025078","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7022875d4a054c2d518e5e5a7b9d500799d50140","minikube.k8s.io/name":"multinode-025078","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_05_31T19_08_30_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-05-31T19:08:26Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I0531 19:08:49.716195   71907 pod_ready.go:92] pod "kube-controller-manager-multinode-025078" in "kube-system" namespace has status "Ready":"True"
	I0531 19:08:49.716213   71907 pod_ready.go:81] duration metric: took 6.104898ms waiting for pod "kube-controller-manager-multinode-025078" in "kube-system" namespace to be "Ready" ...
	I0531 19:08:49.716225   71907 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-ws8xb" in "kube-system" namespace to be "Ready" ...
	I0531 19:08:49.716277   71907 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-ws8xb
	I0531 19:08:49.716286   71907 round_trippers.go:469] Request Headers:
	I0531 19:08:49.716294   71907 round_trippers.go:473]     Accept: application/json, */*
	I0531 19:08:49.716301   71907 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0531 19:08:49.718520   71907 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0531 19:08:49.718541   71907 round_trippers.go:577] Response Headers:
	I0531 19:08:49.718549   71907 round_trippers.go:580]     Date: Wed, 31 May 2023 19:08:49 GMT
	I0531 19:08:49.718556   71907 round_trippers.go:580]     Audit-Id: ae8d59be-81bc-42f4-9160-1be0365b5fe0
	I0531 19:08:49.718562   71907 round_trippers.go:580]     Cache-Control: no-cache, private
	I0531 19:08:49.718569   71907 round_trippers.go:580]     Content-Type: application/json
	I0531 19:08:49.718576   71907 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 61ef383b-79fe-44e8-8f9c-d13c8cd9f602
	I0531 19:08:49.718585   71907 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9616b646-f7d3-407a-9b78-ebb52861f5ac
	I0531 19:08:49.718859   71907 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-ws8xb","generateName":"kube-proxy-","namespace":"kube-system","uid":"d0bcd8bd-2828-4ad8-affa-aa8fb8b01b14","resourceVersion":"415","creationTimestamp":"2023-05-31T19:08:42Z","labels":{"controller-revision-hash":"8bdf7b6c","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"863e2ca0-e19e-4d3d-aad8-d9f365be6205","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-05-31T19:08:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"863e2ca0-e19e-4d3d-aad8-d9f365be6205\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\ [truncated 5508 chars]
	I0531 19:08:49.719307   71907 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-025078
	I0531 19:08:49.719320   71907 round_trippers.go:469] Request Headers:
	I0531 19:08:49.719328   71907 round_trippers.go:473]     Accept: application/json, */*
	I0531 19:08:49.719335   71907 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0531 19:08:49.721495   71907 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0531 19:08:49.721525   71907 round_trippers.go:577] Response Headers:
	I0531 19:08:49.721534   71907 round_trippers.go:580]     Date: Wed, 31 May 2023 19:08:49 GMT
	I0531 19:08:49.721541   71907 round_trippers.go:580]     Audit-Id: f6dc4a42-0f7d-4a87-b879-969c65b22046
	I0531 19:08:49.721554   71907 round_trippers.go:580]     Cache-Control: no-cache, private
	I0531 19:08:49.721565   71907 round_trippers.go:580]     Content-Type: application/json
	I0531 19:08:49.721572   71907 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 61ef383b-79fe-44e8-8f9c-d13c8cd9f602
	I0531 19:08:49.721578   71907 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9616b646-f7d3-407a-9b78-ebb52861f5ac
	I0531 19:08:49.721985   71907 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-025078","uid":"527e40a2-2af7-4de3-b462-a471bac33c44","resourceVersion":"421","creationTimestamp":"2023-05-31T19:08:26Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-025078","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7022875d4a054c2d518e5e5a7b9d500799d50140","minikube.k8s.io/name":"multinode-025078","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_05_31T19_08_30_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-05-31T19:08:26Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I0531 19:08:49.722353   71907 pod_ready.go:92] pod "kube-proxy-ws8xb" in "kube-system" namespace has status "Ready":"True"
	I0531 19:08:49.722369   71907 pod_ready.go:81] duration metric: took 6.134158ms waiting for pod "kube-proxy-ws8xb" in "kube-system" namespace to be "Ready" ...
	I0531 19:08:49.722378   71907 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-025078" in "kube-system" namespace to be "Ready" ...
	I0531 19:08:49.722432   71907 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-025078
	I0531 19:08:49.722442   71907 round_trippers.go:469] Request Headers:
	I0531 19:08:49.722450   71907 round_trippers.go:473]     Accept: application/json, */*
	I0531 19:08:49.722458   71907 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0531 19:08:49.724848   71907 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0531 19:08:49.724897   71907 round_trippers.go:577] Response Headers:
	I0531 19:08:49.724920   71907 round_trippers.go:580]     Audit-Id: 00054ec9-9fab-4bdc-9a9c-36969ef54eb7
	I0531 19:08:49.724941   71907 round_trippers.go:580]     Cache-Control: no-cache, private
	I0531 19:08:49.724976   71907 round_trippers.go:580]     Content-Type: application/json
	I0531 19:08:49.725004   71907 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 61ef383b-79fe-44e8-8f9c-d13c8cd9f602
	I0531 19:08:49.725017   71907 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9616b646-f7d3-407a-9b78-ebb52861f5ac
	I0531 19:08:49.725025   71907 round_trippers.go:580]     Date: Wed, 31 May 2023 19:08:49 GMT
	I0531 19:08:49.725166   71907 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-025078","namespace":"kube-system","uid":"72981a73-7d31-416c-a55e-9e619fd02ad5","resourceVersion":"449","creationTimestamp":"2023-05-31T19:08:29Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"b11a18f131019d2811faf18cbc677083","kubernetes.io/config.mirror":"b11a18f131019d2811faf18cbc677083","kubernetes.io/config.seen":"2023-05-31T19:08:29.235426307Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-025078","uid":"527e40a2-2af7-4de3-b462-a471bac33c44","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-05-31T19:08:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4676 chars]
	I0531 19:08:49.725596   71907 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-025078
	I0531 19:08:49.725610   71907 round_trippers.go:469] Request Headers:
	I0531 19:08:49.725618   71907 round_trippers.go:473]     Accept: application/json, */*
	I0531 19:08:49.725626   71907 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0531 19:08:49.727888   71907 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0531 19:08:49.727908   71907 round_trippers.go:577] Response Headers:
	I0531 19:08:49.727916   71907 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9616b646-f7d3-407a-9b78-ebb52861f5ac
	I0531 19:08:49.727923   71907 round_trippers.go:580]     Date: Wed, 31 May 2023 19:08:49 GMT
	I0531 19:08:49.727930   71907 round_trippers.go:580]     Audit-Id: 2a011622-0766-48dc-98b8-13857326a3f7
	I0531 19:08:49.727937   71907 round_trippers.go:580]     Cache-Control: no-cache, private
	I0531 19:08:49.727947   71907 round_trippers.go:580]     Content-Type: application/json
	I0531 19:08:49.727954   71907 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 61ef383b-79fe-44e8-8f9c-d13c8cd9f602
	I0531 19:08:49.728137   71907 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-025078","uid":"527e40a2-2af7-4de3-b462-a471bac33c44","resourceVersion":"421","creationTimestamp":"2023-05-31T19:08:26Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-025078","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7022875d4a054c2d518e5e5a7b9d500799d50140","minikube.k8s.io/name":"multinode-025078","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_05_31T19_08_30_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-05-31T19:08:26Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I0531 19:08:49.728504   71907 pod_ready.go:92] pod "kube-scheduler-multinode-025078" in "kube-system" namespace has status "Ready":"True"
	I0531 19:08:49.728519   71907 pod_ready.go:81] duration metric: took 6.129587ms waiting for pod "kube-scheduler-multinode-025078" in "kube-system" namespace to be "Ready" ...
	I0531 19:08:49.728530   71907 pod_ready.go:38] duration metric: took 5.078293746s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
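Editor's note: the pod_ready lines above are minikube polling each system-critical pod's Ready condition (and re-reading the node) through the API server. The following is a minimal client-go sketch of that check, not minikube's actual pod_ready.go; the helper name waitPodReady and the use of the default kubeconfig are assumptions for illustration.

package main

import (
	"context"
	"fmt"
	"log"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitPodReady polls until the pod's Ready condition is True, the same
// status check being logged above ("Ready":"True").
func waitPodReady(ctx context.Context, cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
		if err == nil {
			for _, c := range pod.Status.Conditions {
				if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
					return nil
				}
			}
		}
		time.Sleep(500 * time.Millisecond) // poll interval chosen arbitrarily for the sketch
	}
	return fmt.Errorf("pod %s/%s not Ready after %v", ns, name, timeout)
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	if err := waitPodReady(context.Background(), cs, "kube-system", "etcd-multinode-025078", 6*time.Minute); err != nil {
		log.Fatal(err)
	}
}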
	I0531 19:08:49.728547   71907 api_server.go:52] waiting for apiserver process to appear ...
	I0531 19:08:49.728602   71907 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 19:08:49.740313   71907 command_runner.go:130] > 1239
	I0531 19:08:49.741591   71907 api_server.go:72] duration metric: took 6.295250828s to wait for apiserver process to appear ...
	I0531 19:08:49.741610   71907 api_server.go:88] waiting for apiserver healthz status ...
	I0531 19:08:49.741639   71907 api_server.go:253] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I0531 19:08:49.751660   71907 api_server.go:279] https://192.168.58.2:8443/healthz returned 200:
	ok
	I0531 19:08:49.751746   71907 round_trippers.go:463] GET https://192.168.58.2:8443/version
	I0531 19:08:49.751759   71907 round_trippers.go:469] Request Headers:
	I0531 19:08:49.751769   71907 round_trippers.go:473]     Accept: application/json, */*
	I0531 19:08:49.751792   71907 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0531 19:08:49.752963   71907 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0531 19:08:49.753012   71907 round_trippers.go:577] Response Headers:
	I0531 19:08:49.753036   71907 round_trippers.go:580]     Content-Type: application/json
	I0531 19:08:49.753045   71907 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 61ef383b-79fe-44e8-8f9c-d13c8cd9f602
	I0531 19:08:49.753057   71907 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9616b646-f7d3-407a-9b78-ebb52861f5ac
	I0531 19:08:49.753065   71907 round_trippers.go:580]     Content-Length: 263
	I0531 19:08:49.753075   71907 round_trippers.go:580]     Date: Wed, 31 May 2023 19:08:49 GMT
	I0531 19:08:49.753086   71907 round_trippers.go:580]     Audit-Id: 7b25ecdf-f52d-4718-84c2-9ebc739727a7
	I0531 19:08:49.753096   71907 round_trippers.go:580]     Cache-Control: no-cache, private
	I0531 19:08:49.753125   71907 request.go:1188] Response Body: {
	  "major": "1",
	  "minor": "27",
	  "gitVersion": "v1.27.2",
	  "gitCommit": "7f6f68fdabc4df88cfea2dcf9a19b2b830f1e647",
	  "gitTreeState": "clean",
	  "buildDate": "2023-05-17T14:13:28Z",
	  "goVersion": "go1.20.4",
	  "compiler": "gc",
	  "platform": "linux/arm64"
	}
	I0531 19:08:49.753222   71907 api_server.go:141] control plane version: v1.27.2
	I0531 19:08:49.753240   71907 api_server.go:131] duration metric: took 11.623276ms to wait for apiserver health ...
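Editor's note: after confirming the apiserver process via pgrep and /healthz, minikube GETs /version and extracts gitVersion from the JSON body shown above. A hedged sketch of that decode follows; the struct and function names are hypothetical, and it assumes /version is reachable anonymously (skipping TLS verification here purely to keep the sketch short).

package main

import (
	"crypto/tls"
	"encoding/json"
	"fmt"
	"net/http"
)

type versionInfo struct {
	Major      string `json:"major"`
	Minor      string `json:"minor"`
	GitVersion string `json:"gitVersion"`
	Platform   string `json:"platform"`
}

func controlPlaneVersion(host string) (string, error) {
	// Sketch only: real code should trust the cluster CA instead of skipping verification.
	c := &http.Client{Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}}}
	resp, err := c.Get("https://" + host + "/version")
	if err != nil {
		return "", err
	}
	defer resp.Body.Close()
	var v versionInfo
	if err := json.NewDecoder(resp.Body).Decode(&v); err != nil {
		return "", err
	}
	return v.GitVersion, nil // e.g. "v1.27.2" per the response body above
}

func main() {
	fmt.Println(controlPlaneVersion("192.168.58.2:8443"))
}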
	I0531 19:08:49.753248   71907 system_pods.go:43] waiting for kube-system pods to appear ...
	I0531 19:08:49.890659   71907 request.go:628] Waited for 137.329233ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I0531 19:08:49.890784   71907 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I0531 19:08:49.890803   71907 round_trippers.go:469] Request Headers:
	I0531 19:08:49.890813   71907 round_trippers.go:473]     Accept: application/json, */*
	I0531 19:08:49.890824   71907 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0531 19:08:49.894623   71907 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0531 19:08:49.894649   71907 round_trippers.go:577] Response Headers:
	I0531 19:08:49.894658   71907 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9616b646-f7d3-407a-9b78-ebb52861f5ac
	I0531 19:08:49.894666   71907 round_trippers.go:580]     Date: Wed, 31 May 2023 19:08:49 GMT
	I0531 19:08:49.894680   71907 round_trippers.go:580]     Audit-Id: 8778e16c-2ec7-4642-9bd2-ccaa08ca76c2
	I0531 19:08:49.894691   71907 round_trippers.go:580]     Cache-Control: no-cache, private
	I0531 19:08:49.894702   71907 round_trippers.go:580]     Content-Type: application/json
	I0531 19:08:49.894709   71907 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 61ef383b-79fe-44e8-8f9c-d13c8cd9f602
	I0531 19:08:49.895676   71907 request.go:1188] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"453"},"items":[{"metadata":{"name":"coredns-5d78c9869d-hhw4h","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"3f927f21-c6c7-43a2-a635-4fd3c672172d","resourceVersion":"442","creationTimestamp":"2023-05-31T19:08:43Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"463ad278-b9f3-4d58-8542-8ec925fef61a","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-05-31T19:08:43Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"463ad278-b9f3-4d58-8542-8ec925fef61a\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 55611 chars]
	I0531 19:08:49.898108   71907 system_pods.go:59] 8 kube-system pods found
	I0531 19:08:49.898136   71907 system_pods.go:61] "coredns-5d78c9869d-hhw4h" [3f927f21-c6c7-43a2-a635-4fd3c672172d] Running
	I0531 19:08:49.898142   71907 system_pods.go:61] "etcd-multinode-025078" [ae9f84a1-9fff-46b4-b27d-4459cba13a8b] Running
	I0531 19:08:49.898147   71907 system_pods.go:61] "kindnet-556pq" [9385291d-39c2-4506-8a97-e8e9f080feb1] Running
	I0531 19:08:49.898152   71907 system_pods.go:61] "kube-apiserver-multinode-025078" [c0951b5b-7914-475e-93d5-2b8513832b1e] Running
	I0531 19:08:49.898158   71907 system_pods.go:61] "kube-controller-manager-multinode-025078" [5c741586-53ad-4e55-9aba-d0f8355f2eec] Running
	I0531 19:08:49.898162   71907 system_pods.go:61] "kube-proxy-ws8xb" [d0bcd8bd-2828-4ad8-affa-aa8fb8b01b14] Running
	I0531 19:08:49.898168   71907 system_pods.go:61] "kube-scheduler-multinode-025078" [72981a73-7d31-416c-a55e-9e619fd02ad5] Running
	I0531 19:08:49.898173   71907 system_pods.go:61] "storage-provisioner" [e44bd6e4-ee1c-488f-9b89-90ae9b5880f8] Running
	I0531 19:08:49.898179   71907 system_pods.go:74] duration metric: took 144.915275ms to wait for pod list to return data ...
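Editor's note: the "Waited for ... due to client-side throttling, not priority and fairness" lines above come from client-go's own token-bucket rate limiter, not from the server's APF; with an unset rest.Config the defaults are QPS 5 and Burst 10, so bursts of GETs get delayed locally. A minimal sketch of where those knobs live (values shown are the defaults, set explicitly for illustration):

package main

import (
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		fmt.Println(err)
		return
	}
	// Zero values mean client-go defaults (QPS 5, Burst 10). Requests beyond
	// the bucket are delayed, producing the throttling log lines above.
	cfg.QPS = 5
	cfg.Burst = 10
	_, err = kubernetes.NewForConfig(cfg)
	fmt.Println(err)
}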
	I0531 19:08:49.898186   71907 default_sa.go:34] waiting for default service account to be created ...
	I0531 19:08:50.090612   71907 request.go:628] Waited for 192.349636ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/default/serviceaccounts
	I0531 19:08:50.090691   71907 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/default/serviceaccounts
	I0531 19:08:50.090701   71907 round_trippers.go:469] Request Headers:
	I0531 19:08:50.090711   71907 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0531 19:08:50.090719   71907 round_trippers.go:473]     Accept: application/json, */*
	I0531 19:08:50.093442   71907 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0531 19:08:50.093486   71907 round_trippers.go:577] Response Headers:
	I0531 19:08:50.093496   71907 round_trippers.go:580]     Date: Wed, 31 May 2023 19:08:50 GMT
	I0531 19:08:50.093512   71907 round_trippers.go:580]     Audit-Id: 5b776148-61dc-4480-a305-84ad1b04156e
	I0531 19:08:50.093524   71907 round_trippers.go:580]     Cache-Control: no-cache, private
	I0531 19:08:50.093534   71907 round_trippers.go:580]     Content-Type: application/json
	I0531 19:08:50.093550   71907 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 61ef383b-79fe-44e8-8f9c-d13c8cd9f602
	I0531 19:08:50.093557   71907 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9616b646-f7d3-407a-9b78-ebb52861f5ac
	I0531 19:08:50.093566   71907 round_trippers.go:580]     Content-Length: 261
	I0531 19:08:50.093590   71907 request.go:1188] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"454"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"da0b3483-1d9a-46a7-bc9a-2d464ee91559","resourceVersion":"348","creationTimestamp":"2023-05-31T19:08:42Z"}}]}
	I0531 19:08:50.093808   71907 default_sa.go:45] found service account: "default"
	I0531 19:08:50.093824   71907 default_sa.go:55] duration metric: took 195.63352ms for default service account to be created ...
	I0531 19:08:50.093833   71907 system_pods.go:116] waiting for k8s-apps to be running ...
	I0531 19:08:50.291208   71907 request.go:628] Waited for 197.290212ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I0531 19:08:50.291277   71907 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I0531 19:08:50.291288   71907 round_trippers.go:469] Request Headers:
	I0531 19:08:50.291297   71907 round_trippers.go:473]     Accept: application/json, */*
	I0531 19:08:50.291305   71907 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0531 19:08:50.295150   71907 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0531 19:08:50.295189   71907 round_trippers.go:577] Response Headers:
	I0531 19:08:50.295198   71907 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9616b646-f7d3-407a-9b78-ebb52861f5ac
	I0531 19:08:50.295223   71907 round_trippers.go:580]     Date: Wed, 31 May 2023 19:08:50 GMT
	I0531 19:08:50.295236   71907 round_trippers.go:580]     Audit-Id: 3b8577e6-115d-47c3-b9a1-3318cf84dea8
	I0531 19:08:50.295244   71907 round_trippers.go:580]     Cache-Control: no-cache, private
	I0531 19:08:50.295255   71907 round_trippers.go:580]     Content-Type: application/json
	I0531 19:08:50.295276   71907 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 61ef383b-79fe-44e8-8f9c-d13c8cd9f602
	I0531 19:08:50.295761   71907 request.go:1188] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"454"},"items":[{"metadata":{"name":"coredns-5d78c9869d-hhw4h","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"3f927f21-c6c7-43a2-a635-4fd3c672172d","resourceVersion":"442","creationTimestamp":"2023-05-31T19:08:43Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"463ad278-b9f3-4d58-8542-8ec925fef61a","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-05-31T19:08:43Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"463ad278-b9f3-4d58-8542-8ec925fef61a\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 55611 chars]
	I0531 19:08:50.298180   71907 system_pods.go:86] 8 kube-system pods found
	I0531 19:08:50.298206   71907 system_pods.go:89] "coredns-5d78c9869d-hhw4h" [3f927f21-c6c7-43a2-a635-4fd3c672172d] Running
	I0531 19:08:50.298214   71907 system_pods.go:89] "etcd-multinode-025078" [ae9f84a1-9fff-46b4-b27d-4459cba13a8b] Running
	I0531 19:08:50.298221   71907 system_pods.go:89] "kindnet-556pq" [9385291d-39c2-4506-8a97-e8e9f080feb1] Running
	I0531 19:08:50.298234   71907 system_pods.go:89] "kube-apiserver-multinode-025078" [c0951b5b-7914-475e-93d5-2b8513832b1e] Running
	I0531 19:08:50.298240   71907 system_pods.go:89] "kube-controller-manager-multinode-025078" [5c741586-53ad-4e55-9aba-d0f8355f2eec] Running
	I0531 19:08:50.298245   71907 system_pods.go:89] "kube-proxy-ws8xb" [d0bcd8bd-2828-4ad8-affa-aa8fb8b01b14] Running
	I0531 19:08:50.298254   71907 system_pods.go:89] "kube-scheduler-multinode-025078" [72981a73-7d31-416c-a55e-9e619fd02ad5] Running
	I0531 19:08:50.298260   71907 system_pods.go:89] "storage-provisioner" [e44bd6e4-ee1c-488f-9b89-90ae9b5880f8] Running
	I0531 19:08:50.298269   71907 system_pods.go:126] duration metric: took 204.426109ms to wait for k8s-apps to be running ...
	I0531 19:08:50.298281   71907 system_svc.go:44] waiting for kubelet service to be running ....
	I0531 19:08:50.298338   71907 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0531 19:08:50.311577   71907 system_svc.go:56] duration metric: took 13.289353ms WaitForService to wait for kubelet.
	I0531 19:08:50.311636   71907 kubeadm.go:581] duration metric: took 6.865299911s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0531 19:08:50.311662   71907 node_conditions.go:102] verifying NodePressure condition ...
	I0531 19:08:50.491025   71907 request.go:628] Waited for 179.298098ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes
	I0531 19:08:50.491088   71907 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes
	I0531 19:08:50.491094   71907 round_trippers.go:469] Request Headers:
	I0531 19:08:50.491104   71907 round_trippers.go:473]     Accept: application/json, */*
	I0531 19:08:50.491112   71907 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0531 19:08:50.493719   71907 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0531 19:08:50.493742   71907 round_trippers.go:577] Response Headers:
	I0531 19:08:50.493760   71907 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9616b646-f7d3-407a-9b78-ebb52861f5ac
	I0531 19:08:50.493768   71907 round_trippers.go:580]     Date: Wed, 31 May 2023 19:08:50 GMT
	I0531 19:08:50.493776   71907 round_trippers.go:580]     Audit-Id: 4eed9d0e-5824-40e9-98a3-d8f127073840
	I0531 19:08:50.493787   71907 round_trippers.go:580]     Cache-Control: no-cache, private
	I0531 19:08:50.493799   71907 round_trippers.go:580]     Content-Type: application/json
	I0531 19:08:50.493806   71907 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 61ef383b-79fe-44e8-8f9c-d13c8cd9f602
	I0531 19:08:50.493956   71907 request.go:1188] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"454"},"items":[{"metadata":{"name":"multinode-025078","uid":"527e40a2-2af7-4de3-b462-a471bac33c44","resourceVersion":"421","creationTimestamp":"2023-05-31T19:08:26Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-025078","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7022875d4a054c2d518e5e5a7b9d500799d50140","minikube.k8s.io/name":"multinode-025078","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_05_31T19_08_30_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":" [truncated 6082 chars]
	I0531 19:08:50.494429   71907 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0531 19:08:50.494451   71907 node_conditions.go:123] node cpu capacity is 2
	I0531 19:08:50.494464   71907 node_conditions.go:105] duration metric: took 182.797325ms to run NodePressure ...
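Editor's note: the NodePressure step above lists nodes, reads Status.Capacity (ephemeral-storage 203034800Ki, cpu 2 in this run) and verifies that no pressure condition is set. A minimal client-go sketch of equivalent checks, not minikube's actual node_conditions.go:

package main

import (
	"context"
	"fmt"
	"log"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	nodes, err := cs.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
	if err != nil {
		log.Fatal(err)
	}
	for _, n := range nodes.Items {
		// Capacity values correspond to the log lines above.
		fmt.Printf("node %s: ephemeral-storage=%s cpu=%s\n", n.Name,
			n.Status.Capacity.StorageEphemeral().String(),
			n.Status.Capacity.Cpu().String())
		for _, c := range n.Status.Conditions {
			switch c.Type {
			case corev1.NodeMemoryPressure, corev1.NodeDiskPressure, corev1.NodePIDPressure:
				if c.Status == corev1.ConditionTrue {
					log.Fatalf("node %s under pressure: %s", n.Name, c.Type)
				}
			}
		}
	}
}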
	I0531 19:08:50.494478   71907 start.go:228] waiting for startup goroutines ...
	I0531 19:08:50.494485   71907 start.go:233] waiting for cluster config update ...
	I0531 19:08:50.494517   71907 start.go:242] writing updated cluster config ...
	I0531 19:08:50.497029   71907 out.go:177] 
	I0531 19:08:50.498806   71907 config.go:182] Loaded profile config "multinode-025078": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.27.2
	I0531 19:08:50.498913   71907 profile.go:148] Saving config to /home/jenkins/minikube-integration/16569-2389/.minikube/profiles/multinode-025078/config.json ...
	I0531 19:08:50.501221   71907 out.go:177] * Starting worker node multinode-025078-m02 in cluster multinode-025078
	I0531 19:08:50.502623   71907 cache.go:122] Beginning downloading kic base image for docker with crio
	I0531 19:08:50.504433   71907 out.go:177] * Pulling base image ...
	I0531 19:08:50.506370   71907 preload.go:132] Checking if preload exists for k8s version v1.27.2 and runtime crio
	I0531 19:08:50.506401   71907 cache.go:57] Caching tarball of preloaded images
	I0531 19:08:50.506439   71907 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685034446-16582@sha256:aa728b22374c829d1e5b0a5d64d51d3e0ae0f2b191381d957516fdff68f357c8 in local docker daemon
	I0531 19:08:50.506570   71907 preload.go:174] Found /home/jenkins/minikube-integration/16569-2389/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.2-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I0531 19:08:50.506602   71907 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.2 on crio
	I0531 19:08:50.506756   71907 profile.go:148] Saving config to /home/jenkins/minikube-integration/16569-2389/.minikube/profiles/multinode-025078/config.json ...
	I0531 19:08:50.526381   71907 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685034446-16582@sha256:aa728b22374c829d1e5b0a5d64d51d3e0ae0f2b191381d957516fdff68f357c8 in local docker daemon, skipping pull
	I0531 19:08:50.526408   71907 cache.go:145] gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685034446-16582@sha256:aa728b22374c829d1e5b0a5d64d51d3e0ae0f2b191381d957516fdff68f357c8 exists in daemon, skipping load
	I0531 19:08:50.526429   71907 cache.go:195] Successfully downloaded all kic artifacts
	I0531 19:08:50.526455   71907 start.go:364] acquiring machines lock for multinode-025078-m02: {Name:mk6733f9ae838c5c40a5a77c88fca692ea00f9e7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0531 19:08:50.527081   71907 start.go:368] acquired machines lock for "multinode-025078-m02" in 596.902µs
	I0531 19:08:50.527114   71907 start.go:93] Provisioning new machine with config: &{Name:multinode-025078 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685034446-16582@sha256:aa728b22374c829d1e5b0a5d64d51d3e0ae0f2b191381d957516fdff68f357c8 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:multinode-025078 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:0 KubernetesVersion:v1.27.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:} &{Name:m02 IP: Port:0 KubernetesVersion:v1.27.2 ContainerRuntime:crio ControlPlane:false Worker:true}
	I0531 19:08:50.527201   71907 start.go:125] createHost starting for "m02" (driver="docker")
	I0531 19:08:50.529678   71907 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0531 19:08:50.529790   71907 start.go:159] libmachine.API.Create for "multinode-025078" (driver="docker")
	I0531 19:08:50.529819   71907 client.go:168] LocalClient.Create starting
	I0531 19:08:50.529881   71907 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/16569-2389/.minikube/certs/ca.pem
	I0531 19:08:50.529916   71907 main.go:141] libmachine: Decoding PEM data...
	I0531 19:08:50.529931   71907 main.go:141] libmachine: Parsing certificate...
	I0531 19:08:50.529983   71907 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/16569-2389/.minikube/certs/cert.pem
	I0531 19:08:50.530000   71907 main.go:141] libmachine: Decoding PEM data...
	I0531 19:08:50.530010   71907 main.go:141] libmachine: Parsing certificate...
	I0531 19:08:50.530247   71907 cli_runner.go:164] Run: docker network inspect multinode-025078 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0531 19:08:50.547142   71907 network_create.go:76] Found existing network {name:multinode-025078 subnet:0x4000d28a50 gateway:[0 0 0 0 0 0 0 0 0 0 255 255 192 168 58 1] mtu:1500}
	I0531 19:08:50.547188   71907 kic.go:117] calculated static IP "192.168.58.3" for the "multinode-025078-m02" container
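Editor's note: kic assigns node IPs sequentially inside the existing Docker network: gateway 192.168.58.1, control plane .2, so m02 gets the "calculated static IP" 192.168.58.3 above. A sketch of that arithmetic with net/netip; the helper name nthNodeIP is hypothetical and minikube's kic code differs in detail:

package main

import (
	"fmt"
	"net/netip"
)

// nthNodeIP returns the address `node` steps past the gateway, e.g.
// gateway 192.168.58.1, node 2 -> 192.168.58.3.
func nthNodeIP(gateway string, node int) (netip.Addr, error) {
	ip, err := netip.ParseAddr(gateway)
	if err != nil {
		return netip.Addr{}, err
	}
	for i := 0; i < node; i++ {
		ip = ip.Next() // step one address at a time past the gateway
	}
	return ip, nil
}

func main() {
	ip, _ := nthNodeIP("192.168.58.1", 2)
	fmt.Println(ip) // 192.168.58.3
}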
	I0531 19:08:50.547256   71907 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0531 19:08:50.563713   71907 cli_runner.go:164] Run: docker volume create multinode-025078-m02 --label name.minikube.sigs.k8s.io=multinode-025078-m02 --label created_by.minikube.sigs.k8s.io=true
	I0531 19:08:50.587495   71907 oci.go:103] Successfully created a docker volume multinode-025078-m02
	I0531 19:08:50.587581   71907 cli_runner.go:164] Run: docker run --rm --name multinode-025078-m02-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-025078-m02 --entrypoint /usr/bin/test -v multinode-025078-m02:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685034446-16582@sha256:aa728b22374c829d1e5b0a5d64d51d3e0ae0f2b191381d957516fdff68f357c8 -d /var/lib
	I0531 19:08:51.153789   71907 oci.go:107] Successfully prepared a docker volume multinode-025078-m02
	I0531 19:08:51.153839   71907 preload.go:132] Checking if preload exists for k8s version v1.27.2 and runtime crio
	I0531 19:08:51.153859   71907 kic.go:190] Starting extracting preloaded images to volume ...
	I0531 19:08:51.153944   71907 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/16569-2389/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.2-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v multinode-025078-m02:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685034446-16582@sha256:aa728b22374c829d1e5b0a5d64d51d3e0ae0f2b191381d957516fdff68f357c8 -I lz4 -xf /preloaded.tar -C /extractDir
	I0531 19:08:55.357169   71907 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/16569-2389/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.2-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v multinode-025078-m02:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685034446-16582@sha256:aa728b22374c829d1e5b0a5d64d51d3e0ae0f2b191381d957516fdff68f357c8 -I lz4 -xf /preloaded.tar -C /extractDir: (4.203185195s)
	I0531 19:08:55.357200   71907 kic.go:199] duration metric: took 4.203338 seconds to extract preloaded images to volume
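Editor's note: the preload step above extracts the .tar.lz4 into the node's /var volume by running tar inside a throwaway kicbase container. An os/exec sketch of the same invocation; the arguments in main are illustrative placeholders, while the full real command (complete paths, pinned kicbase digest) is logged verbatim above:

package main

import (
	"fmt"
	"os/exec"
)

// extractPreload mirrors the "docker run --rm --entrypoint /usr/bin/tar ..." call above.
func extractPreload(tarball, volume, image string) error {
	cmd := exec.Command("docker", "run", "--rm",
		"--entrypoint", "/usr/bin/tar",
		"-v", tarball+":/preloaded.tar:ro", // bind-mount the lz4 tarball read-only
		"-v", volume+":/extractDir", // target named volume that backs /var
		image,
		"-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir")
	out, err := cmd.CombinedOutput()
	if err != nil {
		return fmt.Errorf("extract failed: %v: %s", err, out)
	}
	return nil
}

func main() {
	// Illustrative arguments only.
	err := extractPreload("/path/to/preloaded-images-k8s-v18-v1.27.2-cri-o-overlay-arm64.tar.lz4",
		"multinode-025078-m02", "gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685034446-16582")
	fmt.Println(err)
}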
	W0531 19:08:55.357350   71907 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0531 19:08:55.357456   71907 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0531 19:08:55.419260   71907 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname multinode-025078-m02 --name multinode-025078-m02 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-025078-m02 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=multinode-025078-m02 --network multinode-025078 --ip 192.168.58.3 --volume multinode-025078-m02:/var --security-opt apparmor=unconfined --memory=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685034446-16582@sha256:aa728b22374c829d1e5b0a5d64d51d3e0ae0f2b191381d957516fdff68f357c8
	I0531 19:08:55.771294   71907 cli_runner.go:164] Run: docker container inspect multinode-025078-m02 --format={{.State.Running}}
	I0531 19:08:55.797374   71907 cli_runner.go:164] Run: docker container inspect multinode-025078-m02 --format={{.State.Status}}
	I0531 19:08:55.824976   71907 cli_runner.go:164] Run: docker exec multinode-025078-m02 stat /var/lib/dpkg/alternatives/iptables
	I0531 19:08:55.892045   71907 oci.go:144] the created container "multinode-025078-m02" has a running status.
	I0531 19:08:55.892071   71907 kic.go:221] Creating ssh key for kic: /home/jenkins/minikube-integration/16569-2389/.minikube/machines/multinode-025078-m02/id_rsa...
	I0531 19:08:56.426801   71907 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16569-2389/.minikube/machines/multinode-025078-m02/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0531 19:08:56.426845   71907 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/16569-2389/.minikube/machines/multinode-025078-m02/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0531 19:08:56.465212   71907 cli_runner.go:164] Run: docker container inspect multinode-025078-m02 --format={{.State.Status}}
	I0531 19:08:56.498901   71907 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0531 19:08:56.498926   71907 kic_runner.go:114] Args: [docker exec --privileged multinode-025078-m02 chown docker:docker /home/docker/.ssh/authorized_keys]
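Editor's note: the 381-byte file pushed above is a standard authorized_keys line for the freshly generated machine key. A sketch of producing that pair with crypto/rsa and golang.org/x/crypto/ssh; file paths are illustrative, and minikube's kic.go handles this differently in detail:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"encoding/pem"
	"log"
	"os"

	"golang.org/x/crypto/ssh"
)

func writeKeyPair(privPath, pubPath string) error {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		return err
	}
	privPEM := pem.EncodeToMemory(&pem.Block{
		Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(key),
	})
	if err := os.WriteFile(privPath, privPEM, 0o600); err != nil {
		return err
	}
	pub, err := ssh.NewPublicKey(&key.PublicKey)
	if err != nil {
		return err
	}
	// MarshalAuthorizedKey yields "ssh-rsa AAAA...\n" — the authorized_keys format.
	return os.WriteFile(pubPath, ssh.MarshalAuthorizedKey(pub), 0o644)
}

func main() {
	if err := writeKeyPair("./id_rsa", "./id_rsa.pub"); err != nil {
		log.Fatal(err)
	}
}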
	I0531 19:08:56.616631   71907 cli_runner.go:164] Run: docker container inspect multinode-025078-m02 --format={{.State.Status}}
	I0531 19:08:56.642114   71907 machine.go:88] provisioning docker machine ...
	I0531 19:08:56.642141   71907 ubuntu.go:169] provisioning hostname "multinode-025078-m02"
	I0531 19:08:56.642295   71907 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-025078-m02
	I0531 19:08:56.674332   71907 main.go:141] libmachine: Using SSH client type: native
	I0531 19:08:56.675179   71907 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x39eaa0] 0x3a1430 <nil>  [] 0s} 127.0.0.1 32852 <nil> <nil>}
	I0531 19:08:56.675199   71907 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-025078-m02 && echo "multinode-025078-m02" | sudo tee /etc/hostname
	I0531 19:08:56.918362   71907 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-025078-m02
	
	I0531 19:08:56.918481   71907 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-025078-m02
	I0531 19:08:56.939751   71907 main.go:141] libmachine: Using SSH client type: native
	I0531 19:08:56.940195   71907 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x39eaa0] 0x3a1430 <nil>  [] 0s} 127.0.0.1 32852 <nil> <nil>}
	I0531 19:08:56.940215   71907 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-025078-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-025078-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-025078-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0531 19:08:57.092274   71907 main.go:141] libmachine: SSH cmd err, output: <nil>: 
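Editor's note: provisioning runs those shell snippets over the SSH port Docker published to 127.0.0.1:32852. A minimal golang.org/x/crypto/ssh sketch of "run one command as the docker user", not libmachine's actual native client; host-key checking is disabled here purely to keep the sketch short:

package main

import (
	"fmt"
	"log"
	"os"

	"golang.org/x/crypto/ssh"
)

func runSSH(addr string, signer ssh.Signer, cmd string) (string, error) {
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // sketch only; pin the host key in real code
	}
	client, err := ssh.Dial("tcp", addr, cfg) // e.g. "127.0.0.1:32852" as logged above
	if err != nil {
		return "", err
	}
	defer client.Close()
	sess, err := client.NewSession()
	if err != nil {
		return "", err
	}
	defer sess.Close()
	out, err := sess.CombinedOutput(cmd)
	return string(out), err
}

func main() {
	keyBytes, err := os.ReadFile("id_rsa") // e.g. the machine key created earlier
	if err != nil {
		log.Fatal(err)
	}
	signer, err := ssh.ParsePrivateKey(keyBytes)
	if err != nil {
		log.Fatal(err)
	}
	out, err := runSSH("127.0.0.1:32852", signer, "hostname")
	fmt.Println(out, err)
}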
	I0531 19:08:57.092300   71907 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/16569-2389/.minikube CaCertPath:/home/jenkins/minikube-integration/16569-2389/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/16569-2389/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/16569-2389/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/16569-2389/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/16569-2389/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/16569-2389/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/16569-2389/.minikube}
	I0531 19:08:57.092316   71907 ubuntu.go:177] setting up certificates
	I0531 19:08:57.092327   71907 provision.go:83] configureAuth start
	I0531 19:08:57.092396   71907 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-025078-m02
	I0531 19:08:57.120805   71907 provision.go:138] copyHostCerts
	I0531 19:08:57.120858   71907 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16569-2389/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/16569-2389/.minikube/ca.pem
	I0531 19:08:57.120907   71907 exec_runner.go:144] found /home/jenkins/minikube-integration/16569-2389/.minikube/ca.pem, removing ...
	I0531 19:08:57.120919   71907 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16569-2389/.minikube/ca.pem
	I0531 19:08:57.121000   71907 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16569-2389/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/16569-2389/.minikube/ca.pem (1078 bytes)
	I0531 19:08:57.121084   71907 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16569-2389/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/16569-2389/.minikube/cert.pem
	I0531 19:08:57.121115   71907 exec_runner.go:144] found /home/jenkins/minikube-integration/16569-2389/.minikube/cert.pem, removing ...
	I0531 19:08:57.121124   71907 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16569-2389/.minikube/cert.pem
	I0531 19:08:57.121152   71907 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16569-2389/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/16569-2389/.minikube/cert.pem (1123 bytes)
	I0531 19:08:57.121196   71907 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16569-2389/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/16569-2389/.minikube/key.pem
	I0531 19:08:57.121215   71907 exec_runner.go:144] found /home/jenkins/minikube-integration/16569-2389/.minikube/key.pem, removing ...
	I0531 19:08:57.121222   71907 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16569-2389/.minikube/key.pem
	I0531 19:08:57.121246   71907 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16569-2389/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/16569-2389/.minikube/key.pem (1679 bytes)
	I0531 19:08:57.121294   71907 provision.go:112] generating server cert: /home/jenkins/minikube-integration/16569-2389/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/16569-2389/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/16569-2389/.minikube/certs/ca-key.pem org=jenkins.multinode-025078-m02 san=[192.168.58.3 127.0.0.1 localhost 127.0.0.1 minikube multinode-025078-m02]
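Editor's note: provision.go generates a server certificate whose SANs are listed above (node IP, localhost, hostnames), signed by the minikube CA. A compressed crypto/x509 sketch of a SAN-bearing server cert; it is self-signed for brevity and elides error handling, whereas the real code signs with ca.pem/ca-key.pem:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, _ := rsa.GenerateKey(rand.Reader, 2048)
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.multinode-025078-m02"}},
		// SANs mirror the san=[...] list logged above.
		DNSNames:    []string{"localhost", "minikube", "multinode-025078-m02"},
		IPAddresses: []net.IP{net.ParseIP("192.168.58.3"), net.ParseIP("127.0.0.1")},
		NotBefore:   time.Now(),
		NotAfter:    time.Now().Add(26280 * time.Hour), // CertExpiration from the config above
		KeyUsage:    x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	der, _ := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}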
	I0531 19:08:57.580953   71907 provision.go:172] copyRemoteCerts
	I0531 19:08:57.581021   71907 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0531 19:08:57.581063   71907 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-025078-m02
	I0531 19:08:57.599569   71907 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32852 SSHKeyPath:/home/jenkins/minikube-integration/16569-2389/.minikube/machines/multinode-025078-m02/id_rsa Username:docker}
	I0531 19:08:57.693642   71907 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16569-2389/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0531 19:08:57.693705   71907 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16569-2389/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0531 19:08:57.726549   71907 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16569-2389/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0531 19:08:57.726624   71907 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16569-2389/.minikube/machines/server.pem --> /etc/docker/server.pem (1237 bytes)
	I0531 19:08:57.758967   71907 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16569-2389/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0531 19:08:57.759073   71907 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16569-2389/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0531 19:08:57.788413   71907 provision.go:86] duration metric: configureAuth took 696.073345ms
	I0531 19:08:57.788436   71907 ubuntu.go:193] setting minikube options for container-runtime
	I0531 19:08:57.788623   71907 config.go:182] Loaded profile config "multinode-025078": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.27.2
	I0531 19:08:57.788732   71907 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-025078-m02
	I0531 19:08:57.806988   71907 main.go:141] libmachine: Using SSH client type: native
	I0531 19:08:57.807503   71907 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x39eaa0] 0x3a1430 <nil>  [] 0s} 127.0.0.1 32852 <nil> <nil>}
	I0531 19:08:57.807524   71907 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0531 19:08:58.057789   71907 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0531 19:08:58.057808   71907 machine.go:91] provisioned docker machine in 1.415677488s
	I0531 19:08:58.057818   71907 client.go:171] LocalClient.Create took 7.52799027s
	I0531 19:08:58.057830   71907 start.go:167] duration metric: libmachine.API.Create for "multinode-025078" took 7.528040468s
	I0531 19:08:58.057836   71907 start.go:300] post-start starting for "multinode-025078-m02" (driver="docker")
	I0531 19:08:58.057842   71907 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0531 19:08:58.057903   71907 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0531 19:08:58.057967   71907 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-025078-m02
	I0531 19:08:58.076547   71907 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32852 SSHKeyPath:/home/jenkins/minikube-integration/16569-2389/.minikube/machines/multinode-025078-m02/id_rsa Username:docker}
	I0531 19:08:58.169605   71907 ssh_runner.go:195] Run: cat /etc/os-release
	I0531 19:08:58.173591   71907 command_runner.go:130] > PRETTY_NAME="Ubuntu 22.04.2 LTS"
	I0531 19:08:58.173610   71907 command_runner.go:130] > NAME="Ubuntu"
	I0531 19:08:58.173618   71907 command_runner.go:130] > VERSION_ID="22.04"
	I0531 19:08:58.173634   71907 command_runner.go:130] > VERSION="22.04.2 LTS (Jammy Jellyfish)"
	I0531 19:08:58.173640   71907 command_runner.go:130] > VERSION_CODENAME=jammy
	I0531 19:08:58.173645   71907 command_runner.go:130] > ID=ubuntu
	I0531 19:08:58.173649   71907 command_runner.go:130] > ID_LIKE=debian
	I0531 19:08:58.173655   71907 command_runner.go:130] > HOME_URL="https://www.ubuntu.com/"
	I0531 19:08:58.173661   71907 command_runner.go:130] > SUPPORT_URL="https://help.ubuntu.com/"
	I0531 19:08:58.173678   71907 command_runner.go:130] > BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
	I0531 19:08:58.173690   71907 command_runner.go:130] > PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
	I0531 19:08:58.173695   71907 command_runner.go:130] > UBUNTU_CODENAME=jammy
	I0531 19:08:58.173760   71907 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0531 19:08:58.173790   71907 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0531 19:08:58.173806   71907 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0531 19:08:58.173813   71907 info.go:137] Remote host: Ubuntu 22.04.2 LTS
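Editor's note: the "Couldn't set key ..., no corresponding struct field found" warnings above are harmless; libmachine maps /etc/os-release keys onto a struct and skips keys it has no field for. A map-based sketch of the same parse (helper name is hypothetical):

package main

import (
	"bufio"
	"fmt"
	"strings"
)

// parseOSRelease turns the KEY="value" lines above into a map; unknown keys
// simply stay in the map instead of triggering struct-field warnings.
func parseOSRelease(contents string) map[string]string {
	out := map[string]string{}
	sc := bufio.NewScanner(strings.NewReader(contents))
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		if line == "" || strings.HasPrefix(line, "#") {
			continue
		}
		k, v, ok := strings.Cut(line, "=")
		if !ok {
			continue
		}
		out[k] = strings.Trim(v, `"`) // PRETTY_NAME="Ubuntu 22.04.2 LTS" -> Ubuntu 22.04.2 LTS
	}
	return out
}

func main() {
	fmt.Println(parseOSRelease(`PRETTY_NAME="Ubuntu 22.04.2 LTS"
NAME="Ubuntu"
VERSION_CODENAME=jammy`)["PRETTY_NAME"])
}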
	I0531 19:08:58.173827   71907 filesync.go:126] Scanning /home/jenkins/minikube-integration/16569-2389/.minikube/addons for local assets ...
	I0531 19:08:58.173881   71907 filesync.go:126] Scanning /home/jenkins/minikube-integration/16569-2389/.minikube/files for local assets ...
	I0531 19:08:58.173965   71907 filesync.go:149] local asset: /home/jenkins/minikube-integration/16569-2389/.minikube/files/etc/ssl/certs/78042.pem -> 78042.pem in /etc/ssl/certs
	I0531 19:08:58.173975   71907 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16569-2389/.minikube/files/etc/ssl/certs/78042.pem -> /etc/ssl/certs/78042.pem
	I0531 19:08:58.174081   71907 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0531 19:08:58.184502   71907 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16569-2389/.minikube/files/etc/ssl/certs/78042.pem --> /etc/ssl/certs/78042.pem (1708 bytes)
	I0531 19:08:58.212817   71907 start.go:303] post-start completed in 154.966857ms
	I0531 19:08:58.213204   71907 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-025078-m02
	I0531 19:08:58.231220   71907 profile.go:148] Saving config to /home/jenkins/minikube-integration/16569-2389/.minikube/profiles/multinode-025078/config.json ...
	I0531 19:08:58.231504   71907 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0531 19:08:58.231554   71907 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-025078-m02
	I0531 19:08:58.248945   71907 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32852 SSHKeyPath:/home/jenkins/minikube-integration/16569-2389/.minikube/machines/multinode-025078-m02/id_rsa Username:docker}
	I0531 19:08:58.341277   71907 command_runner.go:130] > 10%
	I0531 19:08:58.341374   71907 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0531 19:08:58.347058   71907 command_runner.go:130] > 175G
	I0531 19:08:58.347467   71907 start.go:128] duration metric: createHost completed in 7.820256355s
	I0531 19:08:58.347489   71907 start.go:83] releasing machines lock for "multinode-025078-m02", held for 7.820393526s
	I0531 19:08:58.347562   71907 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-025078-m02
	I0531 19:08:58.372495   71907 out.go:177] * Found network options:
	I0531 19:08:58.374065   71907 out.go:177]   - NO_PROXY=192.168.58.2
	W0531 19:08:58.375716   71907 proxy.go:119] fail to check proxy env: Error ip not in block
	W0531 19:08:58.375770   71907 proxy.go:119] fail to check proxy env: Error ip not in block
	I0531 19:08:58.375835   71907 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0531 19:08:58.375900   71907 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-025078-m02
	I0531 19:08:58.376166   71907 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0531 19:08:58.376218   71907 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-025078-m02
	I0531 19:08:58.396534   71907 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32852 SSHKeyPath:/home/jenkins/minikube-integration/16569-2389/.minikube/machines/multinode-025078-m02/id_rsa Username:docker}
	I0531 19:08:58.414409   71907 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32852 SSHKeyPath:/home/jenkins/minikube-integration/16569-2389/.minikube/machines/multinode-025078-m02/id_rsa Username:docker}
	I0531 19:08:58.638249   71907 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0531 19:08:58.660392   71907 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0531 19:08:58.660443   71907 command_runner.go:130] >   File: /etc/cni/net.d/200-loopback.conf
	I0531 19:08:58.660454   71907 command_runner.go:130] >   Size: 54        	Blocks: 8          IO Block: 4096   regular file
	I0531 19:08:58.660468   71907 command_runner.go:130] > Device: b3h/179d	Inode: 1302367     Links: 1
	I0531 19:08:58.660476   71907 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0531 19:08:58.660485   71907 command_runner.go:130] > Access: 2023-04-04 14:31:21.000000000 +0000
	I0531 19:08:58.660491   71907 command_runner.go:130] > Modify: 2023-04-04 14:31:21.000000000 +0000
	I0531 19:08:58.660501   71907 command_runner.go:130] > Change: 2023-05-31 18:44:35.221132349 +0000
	I0531 19:08:58.660507   71907 command_runner.go:130] >  Birth: 2023-05-31 18:44:35.221132349 +0000
	I0531 19:08:58.660580   71907 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0531 19:08:58.685424   71907 cni.go:220] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0531 19:08:58.685570   71907 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0531 19:08:58.723598   71907 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf, 
	I0531 19:08:58.723728   71907 cni.go:261] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
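Note: "disabling" a bridge/podman CNI config here is only a rename; each matching file gains a .mk_disabled suffix, which the CNI loader then skips. For one of the files listed above, the find -exec expands to roughly:

	sudo mv /etc/cni/net.d/100-crio-bridge.conf /etc/cni/net.d/100-crio-bridge.conf.mk_disabled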
	I0531 19:08:58.723768   71907 start.go:481] detecting cgroup driver to use...
	I0531 19:08:58.723817   71907 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0531 19:08:58.723888   71907 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0531 19:08:58.742677   71907 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0531 19:08:58.756146   71907 docker.go:193] disabling cri-docker service (if available) ...
	I0531 19:08:58.756209   71907 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0531 19:08:58.772189   71907 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0531 19:08:58.788970   71907 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0531 19:08:58.886601   71907 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0531 19:08:58.987519   71907 command_runner.go:130] ! Created symlink /etc/systemd/system/cri-docker.service → /dev/null.
	I0531 19:08:58.987629   71907 docker.go:209] disabling docker service ...
	I0531 19:08:58.987700   71907 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0531 19:08:59.009252   71907 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0531 19:08:59.024085   71907 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0531 19:08:59.119977   71907 command_runner.go:130] ! Removed /etc/systemd/system/sockets.target.wants/docker.socket.
	I0531 19:08:59.120056   71907 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0531 19:08:59.229779   71907 command_runner.go:130] ! Created symlink /etc/systemd/system/docker.service → /dev/null.
	I0531 19:08:59.229851   71907 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0531 19:08:59.244636   71907 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0531 19:08:59.265482   71907 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I0531 19:08:59.267287   71907 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0531 19:08:59.267356   71907 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0531 19:08:59.282152   71907 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0531 19:08:59.282297   71907 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0531 19:08:59.299760   71907 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0531 19:08:59.312263   71907 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0531 19:08:59.324584   71907 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0531 19:08:59.336147   71907 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0531 19:08:59.345892   71907 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0531 19:08:59.346987   71907 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0531 19:08:59.357424   71907 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0531 19:08:59.458072   71907 ssh_runner.go:195] Run: sudo systemctl restart crio
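Note: the three sed edits before this restart rewrite keys in the 02-crio.conf drop-in in place. A minimal sketch of the resulting values, assuming the keys sit in their usual crio.conf sections (the rest of the drop-in is untouched):

	[crio.runtime]
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"

	[crio.image]
	pause_image = "registry.k8s.io/pause:3.9"

The "crio config" dump further down confirms cgroup_manager = "cgroupfs" and conmon_cgroup = "pod" took effect.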
	I0531 19:08:59.587018   71907 start.go:528] Will wait 60s for socket path /var/run/crio/crio.sock
	I0531 19:08:59.587085   71907 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0531 19:08:59.591553   71907 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0531 19:08:59.591575   71907 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0531 19:08:59.591583   71907 command_runner.go:130] > Device: bdh/189d	Inode: 186         Links: 1
	I0531 19:08:59.591591   71907 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0531 19:08:59.591597   71907 command_runner.go:130] > Access: 2023-05-31 19:08:59.563275356 +0000
	I0531 19:08:59.591603   71907 command_runner.go:130] > Modify: 2023-05-31 19:08:59.563275356 +0000
	I0531 19:08:59.591609   71907 command_runner.go:130] > Change: 2023-05-31 19:08:59.563275356 +0000
	I0531 19:08:59.591613   71907 command_runner.go:130] >  Birth: -
	I0531 19:08:59.591685   71907 start.go:549] Will wait 60s for crictl version
	I0531 19:08:59.591736   71907 ssh_runner.go:195] Run: which crictl
	I0531 19:08:59.595816   71907 command_runner.go:130] > /usr/bin/crictl
	I0531 19:08:59.596251   71907 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0531 19:08:59.635899   71907 command_runner.go:130] > Version:  0.1.0
	I0531 19:08:59.636014   71907 command_runner.go:130] > RuntimeName:  cri-o
	I0531 19:08:59.636180   71907 command_runner.go:130] > RuntimeVersion:  1.24.5
	I0531 19:08:59.636355   71907 command_runner.go:130] > RuntimeApiVersion:  v1
	I0531 19:08:59.639314   71907 start.go:565] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.5
	RuntimeApiVersion:  v1
	I0531 19:08:59.639400   71907 ssh_runner.go:195] Run: crio --version
	I0531 19:08:59.685283   71907 command_runner.go:130] > crio version 1.24.5
	I0531 19:08:59.685342   71907 command_runner.go:130] > Version:          1.24.5
	I0531 19:08:59.685366   71907 command_runner.go:130] > GitCommit:        b007cb6753d97de6218787b6894b0e3cc1dc8ecd
	I0531 19:08:59.685386   71907 command_runner.go:130] > GitTreeState:     clean
	I0531 19:08:59.685419   71907 command_runner.go:130] > BuildDate:        2023-04-04T14:31:22Z
	I0531 19:08:59.685444   71907 command_runner.go:130] > GoVersion:        go1.18.2
	I0531 19:08:59.685465   71907 command_runner.go:130] > Compiler:         gc
	I0531 19:08:59.685485   71907 command_runner.go:130] > Platform:         linux/arm64
	I0531 19:08:59.685520   71907 command_runner.go:130] > Linkmode:         dynamic
	I0531 19:08:59.685557   71907 command_runner.go:130] > BuildTags:        apparmor, exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I0531 19:08:59.685577   71907 command_runner.go:130] > SeccompEnabled:   true
	I0531 19:08:59.685595   71907 command_runner.go:130] > AppArmorEnabled:  false
	I0531 19:08:59.687237   71907 ssh_runner.go:195] Run: crio --version
	I0531 19:08:59.732129   71907 command_runner.go:130] > crio version 1.24.5
	I0531 19:08:59.732147   71907 command_runner.go:130] > Version:          1.24.5
	I0531 19:08:59.732155   71907 command_runner.go:130] > GitCommit:        b007cb6753d97de6218787b6894b0e3cc1dc8ecd
	I0531 19:08:59.732161   71907 command_runner.go:130] > GitTreeState:     clean
	I0531 19:08:59.732167   71907 command_runner.go:130] > BuildDate:        2023-04-04T14:31:22Z
	I0531 19:08:59.732173   71907 command_runner.go:130] > GoVersion:        go1.18.2
	I0531 19:08:59.732177   71907 command_runner.go:130] > Compiler:         gc
	I0531 19:08:59.732184   71907 command_runner.go:130] > Platform:         linux/arm64
	I0531 19:08:59.732190   71907 command_runner.go:130] > Linkmode:         dynamic
	I0531 19:08:59.732199   71907 command_runner.go:130] > BuildTags:        apparmor, exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I0531 19:08:59.732204   71907 command_runner.go:130] > SeccompEnabled:   true
	I0531 19:08:59.732218   71907 command_runner.go:130] > AppArmorEnabled:  false
	I0531 19:08:59.737310   71907 out.go:177] * Preparing Kubernetes v1.27.2 on CRI-O 1.24.5 ...
	I0531 19:08:59.739157   71907 out.go:177]   - env NO_PROXY=192.168.58.2
	I0531 19:08:59.741125   71907 cli_runner.go:164] Run: docker network inspect multinode-025078 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0531 19:08:59.763761   71907 ssh_runner.go:195] Run: grep 192.168.58.1	host.minikube.internal$ /etc/hosts
	I0531 19:08:59.768482   71907 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.58.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0531 19:08:59.781941   71907 certs.go:56] Setting up /home/jenkins/minikube-integration/16569-2389/.minikube/profiles/multinode-025078 for IP: 192.168.58.3
	I0531 19:08:59.781973   71907 certs.go:190] acquiring lock for shared ca certs: {Name:mk0147accf8b8da231d39646bdc89fced67451cd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0531 19:08:59.782107   71907 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/16569-2389/.minikube/ca.key
	I0531 19:08:59.782153   71907 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/16569-2389/.minikube/proxy-client-ca.key
	I0531 19:08:59.782168   71907 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16569-2389/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0531 19:08:59.782182   71907 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16569-2389/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0531 19:08:59.782197   71907 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16569-2389/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0531 19:08:59.782209   71907 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16569-2389/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0531 19:08:59.782264   71907 certs.go:437] found cert: /home/jenkins/minikube-integration/16569-2389/.minikube/certs/home/jenkins/minikube-integration/16569-2389/.minikube/certs/7804.pem (1338 bytes)
	W0531 19:08:59.782298   71907 certs.go:433] ignoring /home/jenkins/minikube-integration/16569-2389/.minikube/certs/home/jenkins/minikube-integration/16569-2389/.minikube/certs/7804_empty.pem, impossibly tiny 0 bytes
	I0531 19:08:59.782314   71907 certs.go:437] found cert: /home/jenkins/minikube-integration/16569-2389/.minikube/certs/home/jenkins/minikube-integration/16569-2389/.minikube/certs/ca-key.pem (1675 bytes)
	I0531 19:08:59.782343   71907 certs.go:437] found cert: /home/jenkins/minikube-integration/16569-2389/.minikube/certs/home/jenkins/minikube-integration/16569-2389/.minikube/certs/ca.pem (1078 bytes)
	I0531 19:08:59.782375   71907 certs.go:437] found cert: /home/jenkins/minikube-integration/16569-2389/.minikube/certs/home/jenkins/minikube-integration/16569-2389/.minikube/certs/cert.pem (1123 bytes)
	I0531 19:08:59.782407   71907 certs.go:437] found cert: /home/jenkins/minikube-integration/16569-2389/.minikube/certs/home/jenkins/minikube-integration/16569-2389/.minikube/certs/key.pem (1679 bytes)
	I0531 19:08:59.782456   71907 certs.go:437] found cert: /home/jenkins/minikube-integration/16569-2389/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/16569-2389/.minikube/files/etc/ssl/certs/78042.pem (1708 bytes)
	I0531 19:08:59.782490   71907 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16569-2389/.minikube/files/etc/ssl/certs/78042.pem -> /usr/share/ca-certificates/78042.pem
	I0531 19:08:59.782506   71907 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16569-2389/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0531 19:08:59.782517   71907 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16569-2389/.minikube/certs/7804.pem -> /usr/share/ca-certificates/7804.pem
	I0531 19:08:59.782907   71907 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16569-2389/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0531 19:08:59.813069   71907 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16569-2389/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0531 19:08:59.841590   71907 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16569-2389/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0531 19:08:59.872334   71907 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16569-2389/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0531 19:08:59.901468   71907 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16569-2389/.minikube/files/etc/ssl/certs/78042.pem --> /usr/share/ca-certificates/78042.pem (1708 bytes)
	I0531 19:08:59.931106   71907 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16569-2389/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0531 19:08:59.961180   71907 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16569-2389/.minikube/certs/7804.pem --> /usr/share/ca-certificates/7804.pem (1338 bytes)
	I0531 19:08:59.990714   71907 ssh_runner.go:195] Run: openssl version
	I0531 19:08:59.997429   71907 command_runner.go:130] > OpenSSL 3.0.2 15 Mar 2022 (Library: OpenSSL 3.0.2 15 Mar 2022)
	I0531 19:08:59.997878   71907 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/78042.pem && ln -fs /usr/share/ca-certificates/78042.pem /etc/ssl/certs/78042.pem"
	I0531 19:09:00.023119   71907 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/78042.pem
	I0531 19:09:00.049986   71907 command_runner.go:130] > -rw-r--r-- 1 root root 1708 May 31 18:52 /usr/share/ca-certificates/78042.pem
	I0531 19:09:00.050046   71907 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 May 31 18:52 /usr/share/ca-certificates/78042.pem
	I0531 19:09:00.050112   71907 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/78042.pem
	I0531 19:09:00.071222   71907 command_runner.go:130] > 3ec20f2e
	I0531 19:09:00.071397   71907 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/78042.pem /etc/ssl/certs/3ec20f2e.0"
	I0531 19:09:00.088684   71907 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0531 19:09:00.103687   71907 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0531 19:09:00.109390   71907 command_runner.go:130] > -rw-r--r-- 1 root root 1111 May 31 18:45 /usr/share/ca-certificates/minikubeCA.pem
	I0531 19:09:00.109480   71907 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 May 31 18:45 /usr/share/ca-certificates/minikubeCA.pem
	I0531 19:09:00.109593   71907 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0531 19:09:00.119330   71907 command_runner.go:130] > b5213941
	I0531 19:09:00.120054   71907 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0531 19:09:00.134375   71907 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7804.pem && ln -fs /usr/share/ca-certificates/7804.pem /etc/ssl/certs/7804.pem"
	I0531 19:09:00.148403   71907 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7804.pem
	I0531 19:09:00.153418   71907 command_runner.go:130] > -rw-r--r-- 1 root root 1338 May 31 18:52 /usr/share/ca-certificates/7804.pem
	I0531 19:09:00.153472   71907 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 May 31 18:52 /usr/share/ca-certificates/7804.pem
	I0531 19:09:00.153541   71907 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7804.pem
	I0531 19:09:00.162375   71907 command_runner.go:130] > 51391683
	I0531 19:09:00.162882   71907 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/7804.pem /etc/ssl/certs/51391683.0"
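Note: the openssl x509 -hash / ln -fs pairs above follow OpenSSL's c_rehash convention: a CA certificate is located via a symlink named after the hash of its subject name, with a .0 suffix. Replaying one pair by hand, with the values from the log:

	$ openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	b5213941
	$ sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0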
	I0531 19:09:00.175715   71907 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0531 19:09:00.180568   71907 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0531 19:09:00.180613   71907 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0531 19:09:00.180713   71907 ssh_runner.go:195] Run: crio config
	I0531 19:09:00.236873   71907 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I0531 19:09:00.236899   71907 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0531 19:09:00.236908   71907 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0531 19:09:00.236912   71907 command_runner.go:130] > #
	I0531 19:09:00.236921   71907 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I0531 19:09:00.236929   71907 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I0531 19:09:00.236938   71907 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I0531 19:09:00.236947   71907 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I0531 19:09:00.236956   71907 command_runner.go:130] > # reload'.
	I0531 19:09:00.236964   71907 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0531 19:09:00.236977   71907 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0531 19:09:00.236986   71907 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I0531 19:09:00.236996   71907 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I0531 19:09:00.237004   71907 command_runner.go:130] > [crio]
	I0531 19:09:00.237013   71907 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I0531 19:09:00.237022   71907 command_runner.go:130] > # containers images, in this directory.
	I0531 19:09:00.237031   71907 command_runner.go:130] > # root = "/home/docker/.local/share/containers/storage"
	I0531 19:09:00.237043   71907 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0531 19:09:00.237050   71907 command_runner.go:130] > # runroot = "/tmp/containers-user-1000/containers"
	I0531 19:09:00.237061   71907 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I0531 19:09:00.237068   71907 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0531 19:09:00.237077   71907 command_runner.go:130] > # storage_driver = "vfs"
	I0531 19:09:00.237085   71907 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I0531 19:09:00.237096   71907 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I0531 19:09:00.237103   71907 command_runner.go:130] > # storage_option = [
	I0531 19:09:00.237107   71907 command_runner.go:130] > # ]
	I0531 19:09:00.237115   71907 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I0531 19:09:00.237125   71907 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I0531 19:09:00.237131   71907 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I0531 19:09:00.237140   71907 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I0531 19:09:00.237148   71907 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I0531 19:09:00.237159   71907 command_runner.go:130] > # always happen on a node reboot
	I0531 19:09:00.237168   71907 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I0531 19:09:00.237178   71907 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I0531 19:09:00.237190   71907 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I0531 19:09:00.237200   71907 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I0531 19:09:00.237211   71907 command_runner.go:130] > # version_file_persist = "/var/lib/crio/version"
	I0531 19:09:00.237221   71907 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I0531 19:09:00.237237   71907 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I0531 19:09:00.237245   71907 command_runner.go:130] > # internal_wipe = true
	I0531 19:09:00.237254   71907 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I0531 19:09:00.237265   71907 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I0531 19:09:00.237275   71907 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I0531 19:09:00.237282   71907 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I0531 19:09:00.237291   71907 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0531 19:09:00.237296   71907 command_runner.go:130] > [crio.api]
	I0531 19:09:00.237304   71907 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0531 19:09:00.237313   71907 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I0531 19:09:00.237320   71907 command_runner.go:130] > # IP address on which the stream server will listen.
	I0531 19:09:00.237329   71907 command_runner.go:130] > # stream_address = "127.0.0.1"
	I0531 19:09:00.237339   71907 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I0531 19:09:00.237348   71907 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I0531 19:09:00.237356   71907 command_runner.go:130] > # stream_port = "0"
	I0531 19:09:00.237363   71907 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I0531 19:09:00.237371   71907 command_runner.go:130] > # stream_enable_tls = false
	I0531 19:09:00.237379   71907 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I0531 19:09:00.237387   71907 command_runner.go:130] > # stream_idle_timeout = ""
	I0531 19:09:00.237395   71907 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0531 19:09:00.237406   71907 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0531 19:09:00.237413   71907 command_runner.go:130] > # minutes.
	I0531 19:09:00.237419   71907 command_runner.go:130] > # stream_tls_cert = ""
	I0531 19:09:00.237431   71907 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I0531 19:09:00.237442   71907 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0531 19:09:00.237447   71907 command_runner.go:130] > # stream_tls_key = ""
	I0531 19:09:00.237457   71907 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0531 19:09:00.237464   71907 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I0531 19:09:00.237474   71907 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I0531 19:09:00.237480   71907 command_runner.go:130] > # stream_tls_ca = ""
	I0531 19:09:00.237492   71907 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 16 * 1024 * 1024.
	I0531 19:09:00.237501   71907 command_runner.go:130] > # grpc_max_send_msg_size = 83886080
	I0531 19:09:00.237514   71907 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 16 * 1024 * 1024.
	I0531 19:09:00.237523   71907 command_runner.go:130] > # grpc_max_recv_msg_size = 83886080
	I0531 19:09:00.237560   71907 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0531 19:09:00.237572   71907 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I0531 19:09:00.237577   71907 command_runner.go:130] > [crio.runtime]
	I0531 19:09:00.237588   71907 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I0531 19:09:00.237599   71907 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0531 19:09:00.237604   71907 command_runner.go:130] > # "nofile=1024:2048"
	I0531 19:09:00.237612   71907 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0531 19:09:00.237619   71907 command_runner.go:130] > # default_ulimits = [
	I0531 19:09:00.237623   71907 command_runner.go:130] > # ]
	I0531 19:09:00.237632   71907 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0531 19:09:00.237642   71907 command_runner.go:130] > # no_pivot = false
	I0531 19:09:00.237649   71907 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I0531 19:09:00.237661   71907 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I0531 19:09:00.237668   71907 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I0531 19:09:00.237678   71907 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0531 19:09:00.237687   71907 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I0531 19:09:00.237700   71907 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0531 19:09:00.237705   71907 command_runner.go:130] > # conmon = ""
	I0531 19:09:00.237712   71907 command_runner.go:130] > # Cgroup setting for conmon
	I0531 19:09:00.237722   71907 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I0531 19:09:00.237730   71907 command_runner.go:130] > conmon_cgroup = "pod"
	I0531 19:09:00.237738   71907 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I0531 19:09:00.237748   71907 command_runner.go:130] > # environment variables to conmon or the runtime.
	I0531 19:09:00.237757   71907 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0531 19:09:00.237766   71907 command_runner.go:130] > # conmon_env = [
	I0531 19:09:00.237771   71907 command_runner.go:130] > # ]
	I0531 19:09:00.237782   71907 command_runner.go:130] > # Additional environment variables to set for all the
	I0531 19:09:00.237792   71907 command_runner.go:130] > # containers. These are overridden if set in the
	I0531 19:09:00.237800   71907 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I0531 19:09:00.237807   71907 command_runner.go:130] > # default_env = [
	I0531 19:09:00.237812   71907 command_runner.go:130] > # ]
	I0531 19:09:00.237822   71907 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I0531 19:09:00.237831   71907 command_runner.go:130] > # selinux = false
	I0531 19:09:00.237840   71907 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0531 19:09:00.237850   71907 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I0531 19:09:00.237858   71907 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I0531 19:09:00.237866   71907 command_runner.go:130] > # seccomp_profile = ""
	I0531 19:09:00.237874   71907 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I0531 19:09:00.237884   71907 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I0531 19:09:00.237896   71907 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0531 19:09:00.237902   71907 command_runner.go:130] > # which might increase security.
	I0531 19:09:00.237910   71907 command_runner.go:130] > # seccomp_use_default_when_empty = true
	I0531 19:09:00.237919   71907 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0531 19:09:00.237932   71907 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I0531 19:09:00.237940   71907 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0531 19:09:00.237951   71907 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0531 19:09:00.237961   71907 command_runner.go:130] > # This option supports live configuration reload.
	I0531 19:09:00.237970   71907 command_runner.go:130] > # apparmor_profile = "crio-default"
	I0531 19:09:00.237982   71907 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I0531 19:09:00.237987   71907 command_runner.go:130] > # the cgroup blockio controller.
	I0531 19:09:00.237995   71907 command_runner.go:130] > # blockio_config_file = ""
	I0531 19:09:00.238006   71907 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I0531 19:09:00.238016   71907 command_runner.go:130] > # irqbalance daemon.
	I0531 19:09:00.238023   71907 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0531 19:09:00.238034   71907 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I0531 19:09:00.238044   71907 command_runner.go:130] > # This option supports live configuration reload.
	I0531 19:09:00.238052   71907 command_runner.go:130] > # rdt_config_file = ""
	I0531 19:09:00.238059   71907 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I0531 19:09:00.238068   71907 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I0531 19:09:00.238075   71907 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I0531 19:09:00.238084   71907 command_runner.go:130] > # separate_pull_cgroup = ""
	I0531 19:09:00.238092   71907 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I0531 19:09:00.238103   71907 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I0531 19:09:00.238108   71907 command_runner.go:130] > # will be added.
	I0531 19:09:00.238117   71907 command_runner.go:130] > # default_capabilities = [
	I0531 19:09:00.238122   71907 command_runner.go:130] > # 	"CHOWN",
	I0531 19:09:00.238131   71907 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I0531 19:09:00.238137   71907 command_runner.go:130] > # 	"FSETID",
	I0531 19:09:00.238144   71907 command_runner.go:130] > # 	"FOWNER",
	I0531 19:09:00.238149   71907 command_runner.go:130] > # 	"SETGID",
	I0531 19:09:00.238154   71907 command_runner.go:130] > # 	"SETUID",
	I0531 19:09:00.238161   71907 command_runner.go:130] > # 	"SETPCAP",
	I0531 19:09:00.238166   71907 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I0531 19:09:00.238172   71907 command_runner.go:130] > # 	"KILL",
	I0531 19:09:00.238179   71907 command_runner.go:130] > # ]
	I0531 19:09:00.238188   71907 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I0531 19:09:00.238200   71907 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I0531 19:09:00.238206   71907 command_runner.go:130] > # add_inheritable_capabilities = true
	I0531 19:09:00.238217   71907 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0531 19:09:00.238229   71907 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0531 19:09:00.238237   71907 command_runner.go:130] > # default_sysctls = [
	I0531 19:09:00.238241   71907 command_runner.go:130] > # ]
	I0531 19:09:00.238247   71907 command_runner.go:130] > # List of devices on the host that a
	I0531 19:09:00.238257   71907 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I0531 19:09:00.238262   71907 command_runner.go:130] > # allowed_devices = [
	I0531 19:09:00.238269   71907 command_runner.go:130] > # 	"/dev/fuse",
	I0531 19:09:00.238278   71907 command_runner.go:130] > # ]
	I0531 19:09:00.238285   71907 command_runner.go:130] > # List of additional devices, specified as
	I0531 19:09:00.238321   71907 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0531 19:09:00.238331   71907 command_runner.go:130] > # If it is empty or commented out, only the devices
	I0531 19:09:00.238339   71907 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0531 19:09:00.238346   71907 command_runner.go:130] > # additional_devices = [
	I0531 19:09:00.238350   71907 command_runner.go:130] > # ]
	I0531 19:09:00.238362   71907 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I0531 19:09:00.238367   71907 command_runner.go:130] > # cdi_spec_dirs = [
	I0531 19:09:00.238376   71907 command_runner.go:130] > # 	"/etc/cdi",
	I0531 19:09:00.238385   71907 command_runner.go:130] > # 	"/var/run/cdi",
	I0531 19:09:00.238389   71907 command_runner.go:130] > # ]
	I0531 19:09:00.238401   71907 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I0531 19:09:00.238412   71907 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I0531 19:09:00.238417   71907 command_runner.go:130] > # Defaults to false.
	I0531 19:09:00.238423   71907 command_runner.go:130] > # device_ownership_from_security_context = false
	I0531 19:09:00.238431   71907 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0531 19:09:00.238443   71907 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I0531 19:09:00.238448   71907 command_runner.go:130] > # hooks_dir = [
	I0531 19:09:00.238458   71907 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I0531 19:09:00.238462   71907 command_runner.go:130] > # ]
	I0531 19:09:00.238473   71907 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I0531 19:09:00.238484   71907 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0531 19:09:00.238494   71907 command_runner.go:130] > # its default mounts from the following two files:
	I0531 19:09:00.238498   71907 command_runner.go:130] > #
	I0531 19:09:00.238506   71907 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0531 19:09:00.238519   71907 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I0531 19:09:00.238526   71907 command_runner.go:130] > #      override the default mounts shipped with the package.
	I0531 19:09:00.238534   71907 command_runner.go:130] > #
	I0531 19:09:00.238541   71907 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0531 19:09:00.238552   71907 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0531 19:09:00.238563   71907 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0531 19:09:00.238572   71907 command_runner.go:130] > #      only add mounts it finds in this file.
	I0531 19:09:00.238577   71907 command_runner.go:130] > #
	I0531 19:09:00.238582   71907 command_runner.go:130] > # default_mounts_file = ""
	I0531 19:09:00.238589   71907 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I0531 19:09:00.238599   71907 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I0531 19:09:00.238609   71907 command_runner.go:130] > # pids_limit = 0
	I0531 19:09:00.238617   71907 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I0531 19:09:00.238628   71907 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0531 19:09:00.238639   71907 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0531 19:09:00.238652   71907 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I0531 19:09:00.238660   71907 command_runner.go:130] > # log_size_max = -1
	I0531 19:09:00.238668   71907 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I0531 19:09:00.238674   71907 command_runner.go:130] > # log_to_journald = false
	I0531 19:09:00.238685   71907 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I0531 19:09:00.238695   71907 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I0531 19:09:00.238702   71907 command_runner.go:130] > # Path to directory for container attach sockets.
	I0531 19:09:00.238711   71907 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I0531 19:09:00.238721   71907 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I0531 19:09:00.238751   71907 command_runner.go:130] > # bind_mount_prefix = ""
	I0531 19:09:00.238760   71907 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I0531 19:09:00.238765   71907 command_runner.go:130] > # read_only = false
	I0531 19:09:00.238774   71907 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0531 19:09:00.238786   71907 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0531 19:09:00.238792   71907 command_runner.go:130] > # live configuration reload.
	I0531 19:09:00.238800   71907 command_runner.go:130] > # log_level = "info"
	I0531 19:09:00.238807   71907 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I0531 19:09:00.238817   71907 command_runner.go:130] > # This option supports live configuration reload.
	I0531 19:09:00.238825   71907 command_runner.go:130] > # log_filter = ""
	I0531 19:09:00.238833   71907 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I0531 19:09:00.238844   71907 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0531 19:09:00.238849   71907 command_runner.go:130] > # separated by comma.
	I0531 19:09:00.238854   71907 command_runner.go:130] > # uid_mappings = ""
	I0531 19:09:00.238862   71907 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I0531 19:09:00.238873   71907 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0531 19:09:00.238878   71907 command_runner.go:130] > # separated by comma.
	I0531 19:09:00.238887   71907 command_runner.go:130] > # gid_mappings = ""
	I0531 19:09:00.238895   71907 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I0531 19:09:00.238906   71907 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0531 19:09:00.238916   71907 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0531 19:09:00.238925   71907 command_runner.go:130] > # minimum_mappable_uid = -1
	I0531 19:09:00.238932   71907 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I0531 19:09:00.238940   71907 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0531 19:09:00.238950   71907 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0531 19:09:00.238961   71907 command_runner.go:130] > # minimum_mappable_gid = -1
	I0531 19:09:00.238969   71907 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0531 19:09:00.238980   71907 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I0531 19:09:00.238990   71907 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I0531 19:09:00.238998   71907 command_runner.go:130] > # ctr_stop_timeout = 30
	I0531 19:09:00.239006   71907 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0531 19:09:00.239034   71907 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I0531 19:09:00.239045   71907 command_runner.go:130] > # a kernel separating runtime (like kata).
	I0531 19:09:00.239052   71907 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I0531 19:09:00.239060   71907 command_runner.go:130] > # drop_infra_ctr = true
	I0531 19:09:00.239068   71907 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0531 19:09:00.239078   71907 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I0531 19:09:00.239090   71907 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0531 19:09:00.239099   71907 command_runner.go:130] > # infra_ctr_cpuset = ""
	I0531 19:09:00.239107   71907 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I0531 19:09:00.239114   71907 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I0531 19:09:00.239122   71907 command_runner.go:130] > # namespaces_dir = "/var/run"
	I0531 19:09:00.239137   71907 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0531 19:09:00.239146   71907 command_runner.go:130] > # pinns_path = ""
	I0531 19:09:00.239154   71907 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0531 19:09:00.239165   71907 command_runner.go:130] > # The name is matched against the runtimes map below. If this value is changed,
	I0531 19:09:00.239176   71907 command_runner.go:130] > # the corresponding existing entry from the runtimes map below will be ignored.
	I0531 19:09:00.239184   71907 command_runner.go:130] > # default_runtime = "runc"
	I0531 19:09:00.239191   71907 command_runner.go:130] > # A list of paths that, when absent from the host,
	I0531 19:09:00.239200   71907 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I0531 19:09:00.239215   71907 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I0531 19:09:00.239225   71907 command_runner.go:130] > # creation as a file is not desired either.
	I0531 19:09:00.239236   71907 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I0531 19:09:00.239245   71907 command_runner.go:130] > # the hostname is being managed dynamically.
	I0531 19:09:00.239251   71907 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I0531 19:09:00.239258   71907 command_runner.go:130] > # ]
	I0531 19:09:00.239267   71907 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0531 19:09:00.239276   71907 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I0531 19:09:00.239284   71907 command_runner.go:130] > # If no runtime handler is provided, the runtime will be picked based on the level
	I0531 19:09:00.239295   71907 command_runner.go:130] > # of trust of the workload. Each entry in the table should follow the format:
	I0531 19:09:00.239304   71907 command_runner.go:130] > #
	I0531 19:09:00.239311   71907 command_runner.go:130] > #[crio.runtime.runtimes.runtime-handler]
	I0531 19:09:00.239320   71907 command_runner.go:130] > #  runtime_path = "/path/to/the/executable"
	I0531 19:09:00.239326   71907 command_runner.go:130] > #  runtime_type = "oci"
	I0531 19:09:00.239334   71907 command_runner.go:130] > #  runtime_root = "/path/to/the/root"
	I0531 19:09:00.239344   71907 command_runner.go:130] > #  privileged_without_host_devices = false
	I0531 19:09:00.239353   71907 command_runner.go:130] > #  allowed_annotations = []
	I0531 19:09:00.239357   71907 command_runner.go:130] > # Where:
	I0531 19:09:00.239364   71907 command_runner.go:130] > # - runtime-handler: name used to identify the runtime
	I0531 19:09:00.239372   71907 command_runner.go:130] > # - runtime_path (optional, string): absolute path to the runtime executable in
	I0531 19:09:00.239383   71907 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0531 19:09:00.239391   71907 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I0531 19:09:00.239399   71907 command_runner.go:130] > #   in $PATH.
	I0531 19:09:00.239407   71907 command_runner.go:130] > # - runtime_type (optional, string): type of runtime, one of: "oci", "vm". If
	I0531 19:09:00.239416   71907 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I0531 19:09:00.239426   71907 command_runner.go:130] > # - runtime_root (optional, string): root directory for storage of containers
	I0531 19:09:00.239434   71907 command_runner.go:130] > #   state.
	I0531 19:09:00.239442   71907 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I0531 19:09:00.239449   71907 command_runner.go:130] > #   file. This can only be used with when using the VM runtime_type.
	I0531 19:09:00.239461   71907 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0531 19:09:00.239468   71907 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I0531 19:09:00.239479   71907 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0531 19:09:00.239487   71907 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0531 19:09:00.239496   71907 command_runner.go:130] > #   The currently recognized values are:
	I0531 19:09:00.239504   71907 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0531 19:09:00.239516   71907 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I0531 19:09:00.239524   71907 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0531 19:09:00.239531   71907 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0531 19:09:00.239545   71907 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I0531 19:09:00.239554   71907 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I0531 19:09:00.239565   71907 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I0531 19:09:00.239573   71907 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): if set to "container", indicates exec probes
	I0531 19:09:00.239583   71907 command_runner.go:130] > #   should be moved to the container's cgroup
	I0531 19:09:00.239592   71907 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I0531 19:09:00.239602   71907 command_runner.go:130] > runtime_path = "/usr/lib/cri-o-runc/sbin/runc"
	I0531 19:09:00.239610   71907 command_runner.go:130] > runtime_type = "oci"
	I0531 19:09:00.239616   71907 command_runner.go:130] > runtime_root = "/run/runc"
	I0531 19:09:00.239624   71907 command_runner.go:130] > runtime_config_path = ""
	I0531 19:09:00.239629   71907 command_runner.go:130] > monitor_path = ""
	I0531 19:09:00.239638   71907 command_runner.go:130] > monitor_cgroup = ""
	I0531 19:09:00.239644   71907 command_runner.go:130] > monitor_exec_cgroup = ""
	I0531 19:09:00.239679   71907 command_runner.go:130] > # crun is a fast and lightweight fully featured OCI runtime and C library for
	I0531 19:09:00.239689   71907 command_runner.go:130] > # running containers
	I0531 19:09:00.239695   71907 command_runner.go:130] > #[crio.runtime.runtimes.crun]
	I0531 19:09:00.239703   71907 command_runner.go:130] > # Kata Containers is an OCI runtime, where containers are run inside lightweight
	I0531 19:09:00.239716   71907 command_runner.go:130] > # VMs. Kata provides additional isolation towards the host, minimizing the host attack
	I0531 19:09:00.239723   71907 command_runner.go:130] > # surface and mitigating the consequences of containers breakout.
	I0531 19:09:00.239733   71907 command_runner.go:130] > # Kata Containers with the default configured VMM
	I0531 19:09:00.239743   71907 command_runner.go:130] > #[crio.runtime.runtimes.kata-runtime]
	I0531 19:09:00.239749   71907 command_runner.go:130] > # Kata Containers with the QEMU VMM
	I0531 19:09:00.239758   71907 command_runner.go:130] > #[crio.runtime.runtimes.kata-qemu]
	I0531 19:09:00.239764   71907 command_runner.go:130] > # Kata Containers with the Firecracker VMM
	I0531 19:09:00.239770   71907 command_runner.go:130] > #[crio.runtime.runtimes.kata-fc]
	I0531 19:09:00.239780   71907 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I0531 19:09:00.239792   71907 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I0531 19:09:00.239800   71907 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I0531 19:09:00.239813   71907 command_runner.go:130] > # Each workload, has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I0531 19:09:00.239826   71907 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I0531 19:09:00.239836   71907 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I0531 19:09:00.239848   71907 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I0531 19:09:00.239858   71907 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I0531 19:09:00.239869   71907 command_runner.go:130] > # signifying for that resource type to override the default value.
	I0531 19:09:00.239878   71907 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I0531 19:09:00.239886   71907 command_runner.go:130] > # Example:
	I0531 19:09:00.239893   71907 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I0531 19:09:00.239902   71907 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I0531 19:09:00.239913   71907 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I0531 19:09:00.239922   71907 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I0531 19:09:00.239927   71907 command_runner.go:130] > # cpuset = "0-1"
	I0531 19:09:00.239932   71907 command_runner.go:130] > # cpushares = "0"
	I0531 19:09:00.239937   71907 command_runner.go:130] > # Where:
	I0531 19:09:00.239949   71907 command_runner.go:130] > # The workload name is workload-type.
	I0531 19:09:00.239959   71907 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I0531 19:09:00.239969   71907 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I0531 19:09:00.239976   71907 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I0531 19:09:00.239986   71907 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I0531 19:09:00.239993   71907 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I0531 19:09:00.239997   71907 command_runner.go:130] > # 
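	The workloads mechanism described above is driven purely by pod annotations. Below is a minimal client-go sketch (not part of this run) of a pod opting into the commented "workload-type" example; the pod/container names, the "512" value, and the package name are hypothetical.

	package example

	import (
		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	)

	// workloadPod opts a pod into the "workload-type" workload from the
	// commented crio.conf example above. Names and values are placeholders.
	func workloadPod() *corev1.Pod {
		return &corev1.Pod{
			ObjectMeta: metav1.ObjectMeta{
				Name: "demo",
				Annotations: map[string]string{
					// Activation annotation: key-only match, value ignored.
					"io.crio/workload": "",
					// Per-container override, following the example syntax.
					"io.crio.workload-type/demo-ctr": `{"cpushares": "512"}`,
				},
			},
			Spec: corev1.PodSpec{
				Containers: []corev1.Container{{Name: "demo-ctr", Image: "busybox"}},
			},
		}
	}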
	I0531 19:09:00.240005   71907 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0531 19:09:00.240009   71907 command_runner.go:130] > #
	I0531 19:09:00.240016   71907 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I0531 19:09:00.240024   71907 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0531 19:09:00.240032   71907 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I0531 19:09:00.240040   71907 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0531 19:09:00.240047   71907 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I0531 19:09:00.240051   71907 command_runner.go:130] > [crio.image]
	I0531 19:09:00.240059   71907 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I0531 19:09:00.240066   71907 command_runner.go:130] > # default_transport = "docker://"
	I0531 19:09:00.240074   71907 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I0531 19:09:00.240081   71907 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0531 19:09:00.240087   71907 command_runner.go:130] > # global_auth_file = ""
	I0531 19:09:00.240093   71907 command_runner.go:130] > # The image used to instantiate infra containers.
	I0531 19:09:00.240099   71907 command_runner.go:130] > # This option supports live configuration reload.
	I0531 19:09:00.240105   71907 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.9"
	I0531 19:09:00.240114   71907 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I0531 19:09:00.240121   71907 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0531 19:09:00.240128   71907 command_runner.go:130] > # This option supports live configuration reload.
	I0531 19:09:00.240133   71907 command_runner.go:130] > # pause_image_auth_file = ""
	I0531 19:09:00.240141   71907 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I0531 19:09:00.240149   71907 command_runner.go:130] > # When explicitly set to "", it will fall back to the entrypoint and command
	I0531 19:09:00.240158   71907 command_runner.go:130] > # specified in the pause image. When commented out, it will fall back to the
	I0531 19:09:00.240165   71907 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I0531 19:09:00.240171   71907 command_runner.go:130] > # pause_command = "/pause"
	I0531 19:09:00.240180   71907 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I0531 19:09:00.240189   71907 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0531 19:09:00.240198   71907 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I0531 19:09:00.240206   71907 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0531 19:09:00.240217   71907 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I0531 19:09:00.240222   71907 command_runner.go:130] > # signature_policy = ""
	I0531 19:09:00.240245   71907 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I0531 19:09:00.240253   71907 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I0531 19:09:00.240258   71907 command_runner.go:130] > # changing them here.
	I0531 19:09:00.240265   71907 command_runner.go:130] > # insecure_registries = [
	I0531 19:09:00.240273   71907 command_runner.go:130] > # ]
	I0531 19:09:00.240281   71907 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0531 19:09:00.240288   71907 command_runner.go:130] > # ignore; the last will ignore volumes entirely.
	I0531 19:09:00.240299   71907 command_runner.go:130] > # image_volumes = "mkdir"
	I0531 19:09:00.240307   71907 command_runner.go:130] > # Temporary directory to use for storing big files
	I0531 19:09:00.240313   71907 command_runner.go:130] > # big_files_temporary_dir = ""
	I0531 19:09:00.240322   71907 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I0531 19:09:00.240331   71907 command_runner.go:130] > # CNI plugins.
	I0531 19:09:00.240337   71907 command_runner.go:130] > [crio.network]
	I0531 19:09:00.240345   71907 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I0531 19:09:00.240353   71907 command_runner.go:130] > # CRI-O will pick up the first one found in network_dir.
	I0531 19:09:00.240358   71907 command_runner.go:130] > # cni_default_network = ""
	I0531 19:09:00.240368   71907 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I0531 19:09:00.240375   71907 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I0531 19:09:00.240382   71907 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I0531 19:09:00.240386   71907 command_runner.go:130] > # plugin_dirs = [
	I0531 19:09:00.240392   71907 command_runner.go:130] > # 	"/opt/cni/bin/",
	I0531 19:09:00.240396   71907 command_runner.go:130] > # ]
	I0531 19:09:00.240403   71907 command_runner.go:130] > # A necessary configuration for Prometheus-based metrics retrieval
	I0531 19:09:00.240408   71907 command_runner.go:130] > [crio.metrics]
	I0531 19:09:00.240414   71907 command_runner.go:130] > # Globally enable or disable metrics support.
	I0531 19:09:00.240419   71907 command_runner.go:130] > # enable_metrics = false
	I0531 19:09:00.240425   71907 command_runner.go:130] > # Specify enabled metrics collectors.
	I0531 19:09:00.240431   71907 command_runner.go:130] > # Per default all metrics are enabled.
	I0531 19:09:00.240440   71907 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I0531 19:09:00.240448   71907 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I0531 19:09:00.240456   71907 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I0531 19:09:00.240467   71907 command_runner.go:130] > # metrics_collectors = [
	I0531 19:09:00.240473   71907 command_runner.go:130] > # 	"operations",
	I0531 19:09:00.240479   71907 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I0531 19:09:00.240489   71907 command_runner.go:130] > # 	"operations_latency_microseconds",
	I0531 19:09:00.240499   71907 command_runner.go:130] > # 	"operations_errors",
	I0531 19:09:00.240504   71907 command_runner.go:130] > # 	"image_pulls_by_digest",
	I0531 19:09:00.240510   71907 command_runner.go:130] > # 	"image_pulls_by_name",
	I0531 19:09:00.240515   71907 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I0531 19:09:00.240521   71907 command_runner.go:130] > # 	"image_pulls_failures",
	I0531 19:09:00.240529   71907 command_runner.go:130] > # 	"image_pulls_successes",
	I0531 19:09:00.240535   71907 command_runner.go:130] > # 	"image_pulls_layer_size",
	I0531 19:09:00.240543   71907 command_runner.go:130] > # 	"image_layer_reuse",
	I0531 19:09:00.240549   71907 command_runner.go:130] > # 	"containers_oom_total",
	I0531 19:09:00.240557   71907 command_runner.go:130] > # 	"containers_oom",
	I0531 19:09:00.240563   71907 command_runner.go:130] > # 	"processes_defunct",
	I0531 19:09:00.240572   71907 command_runner.go:130] > # 	"operations_total",
	I0531 19:09:00.240577   71907 command_runner.go:130] > # 	"operations_latency_seconds",
	I0531 19:09:00.240583   71907 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I0531 19:09:00.240589   71907 command_runner.go:130] > # 	"operations_errors_total",
	I0531 19:09:00.240599   71907 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I0531 19:09:00.240606   71907 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I0531 19:09:00.240615   71907 command_runner.go:130] > # 	"image_pulls_failure_total",
	I0531 19:09:00.240621   71907 command_runner.go:130] > # 	"image_pulls_success_total",
	I0531 19:09:00.240630   71907 command_runner.go:130] > # 	"image_layer_reuse_total",
	I0531 19:09:00.240636   71907 command_runner.go:130] > # 	"containers_oom_count_total",
	I0531 19:09:00.240643   71907 command_runner.go:130] > # ]
	I0531 19:09:00.240650   71907 command_runner.go:130] > # The port on which the metrics server will listen.
	I0531 19:09:00.240657   71907 command_runner.go:130] > # metrics_port = 9090
	I0531 19:09:00.240664   71907 command_runner.go:130] > # Local socket path to bind the metrics server to
	I0531 19:09:00.240671   71907 command_runner.go:130] > # metrics_socket = ""
	I0531 19:09:00.240678   71907 command_runner.go:130] > # The certificate for the secure metrics server.
	I0531 19:09:00.240689   71907 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I0531 19:09:00.240697   71907 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I0531 19:09:00.240706   71907 command_runner.go:130] > # certificate on any modification event.
	I0531 19:09:00.240715   71907 command_runner.go:130] > # metrics_cert = ""
	I0531 19:09:00.240723   71907 command_runner.go:130] > # The certificate key for the secure metrics server.
	I0531 19:09:00.240733   71907 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I0531 19:09:00.240738   71907 command_runner.go:130] > # metrics_key = ""
	I0531 19:09:00.240746   71907 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I0531 19:09:00.240751   71907 command_runner.go:130] > [crio.tracing]
	I0531 19:09:00.240758   71907 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I0531 19:09:00.240766   71907 command_runner.go:130] > # enable_tracing = false
	I0531 19:09:00.240773   71907 command_runner.go:130] > # Address on which the gRPC trace collector listens.
	I0531 19:09:00.240782   71907 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I0531 19:09:00.240789   71907 command_runner.go:130] > # Number of samples to collect per million spans.
	I0531 19:09:00.240798   71907 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I0531 19:09:00.240810   71907 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I0531 19:09:00.240815   71907 command_runner.go:130] > [crio.stats]
	I0531 19:09:00.240823   71907 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I0531 19:09:00.240833   71907 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I0531 19:09:00.240839   71907 command_runner.go:130] > # stats_collection_period = 0
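	That ends the dumped CRI-O config. Since enable_metrics defaults to false, nothing scrapes these collectors during the test; the sketch below shows how one could read them, assuming enable_metrics = true and the default metrics_port = 9090 shown above.

	package main

	import (
		"fmt"
		"io"
		"net/http"
	)

	func main() {
		// Scrape CRI-O's Prometheus endpoint (assumes enable_metrics = true).
		resp, err := http.Get("http://127.0.0.1:9090/metrics")
		if err != nil {
			panic(err)
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		fmt.Printf("%s", body) // e.g. crio_operations... counters
	}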
	I0531 19:09:00.241622   71907 command_runner.go:130] ! time="2023-05-31 19:09:00.232761996Z" level=info msg="Starting CRI-O, version: 1.24.5, git: b007cb6753d97de6218787b6894b0e3cc1dc8ecd(clean)"
	I0531 19:09:00.241649   71907 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I0531 19:09:00.241723   71907 cni.go:84] Creating CNI manager for ""
	I0531 19:09:00.241735   71907 cni.go:136] 2 nodes found, recommending kindnet
	I0531 19:09:00.241744   71907 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0531 19:09:00.241765   71907 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.58.3 APIServerPort:8443 KubernetesVersion:v1.27.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-025078 NodeName:multinode-025078-m02 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.58.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.58.3 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0531 19:09:00.241895   71907 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.58.3
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-025078-m02"
	  kubeletExtraArgs:
	    node-ip: 192.168.58.3
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.58.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.27.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0531 19:09:00.241954   71907 kubeadm.go:971] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.27.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --enforce-node-allocatable= --hostname-override=multinode-025078-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.58.3
	
	[Install]
	 config:
	{KubernetesVersion:v1.27.2 ClusterName:multinode-025078 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0531 19:09:00.242024   71907 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.27.2
	I0531 19:09:00.252583   71907 command_runner.go:130] > kubeadm
	I0531 19:09:00.252606   71907 command_runner.go:130] > kubectl
	I0531 19:09:00.252612   71907 command_runner.go:130] > kubelet
	I0531 19:09:00.255593   71907 binaries.go:44] Found k8s binaries, skipping transfer
	I0531 19:09:00.255661   71907 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I0531 19:09:00.275332   71907 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0531 19:09:00.298587   71907 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0531 19:09:00.320711   71907 ssh_runner.go:195] Run: grep 192.168.58.2	control-plane.minikube.internal$ /etc/hosts
	I0531 19:09:00.325265   71907 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.58.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
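	The one-liner above strips any stale control-plane.minikube.internal entry from /etc/hosts and appends the fresh mapping. A rough Go equivalent, shown only to unpack the shell pipeline (requires root; not how minikube itself does it):

	package example

	import (
		"os"
		"strings"
	)

	// pinHost mimics the bash pipeline above: drop any line ending in
	// "\t<name>", then append "<ip>\t<name>". Sketch only; a real tool
	// would write to a temp file and replace /etc/hosts atomically.
	func pinHost(ip, name string) error {
		data, err := os.ReadFile("/etc/hosts")
		if err != nil {
			return err
		}
		var kept []string
		for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
			if !strings.HasSuffix(line, "\t"+name) {
				kept = append(kept, line)
			}
		}
		kept = append(kept, ip+"\t"+name)
		return os.WriteFile("/etc/hosts", []byte(strings.Join(kept, "\n")+"\n"), 0644)
	}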
	I0531 19:09:00.338534   71907 host.go:66] Checking if "multinode-025078" exists ...
	I0531 19:09:00.338845   71907 start.go:301] JoinCluster: &{Name:multinode-025078 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685034446-16582@sha256:aa728b22374c829d1e5b0a5d64d51d3e0ae0f2b191381d957516fdff68f357c8 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:multinode-025078 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.27.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0531 19:09:00.338951   71907 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.2:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0531 19:09:00.339001   71907 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-025078
	I0531 19:09:00.339147   71907 config.go:182] Loaded profile config "multinode-025078": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.27.2
	I0531 19:09:00.358118   71907 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32847 SSHKeyPath:/home/jenkins/minikube-integration/16569-2389/.minikube/machines/multinode-025078/id_rsa Username:docker}
	I0531 19:09:00.528130   71907 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token 4u95ah.o7m6wpolcmim38f4 --discovery-token-ca-cert-hash sha256:8cab06d0df96335aa364fa490e6822dfb6067993e3a4e02e6ce54947f37c8db2 
	I0531 19:09:00.531839   71907 start.go:322] trying to join worker node "m02" to cluster: &{Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.27.2 ContainerRuntime:crio ControlPlane:false Worker:true}
	I0531 19:09:00.531881   71907 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 4u95ah.o7m6wpolcmim38f4 --discovery-token-ca-cert-hash sha256:8cab06d0df96335aa364fa490e6822dfb6067993e3a4e02e6ce54947f37c8db2 --ignore-preflight-errors=all --cri-socket /var/run/crio/crio.sock --node-name=multinode-025078-m02"
	I0531 19:09:00.579409   71907 command_runner.go:130] > [preflight] Running pre-flight checks
	I0531 19:09:00.625955   71907 command_runner.go:130] > [preflight] The system verification failed. Printing the output from the verification:
	I0531 19:09:00.625975   71907 command_runner.go:130] > KERNEL_VERSION: 5.15.0-1037-aws
	I0531 19:09:00.625981   71907 command_runner.go:130] > OS: Linux
	I0531 19:09:00.625987   71907 command_runner.go:130] > CGROUPS_CPU: enabled
	I0531 19:09:00.625994   71907 command_runner.go:130] > CGROUPS_CPUACCT: enabled
	I0531 19:09:00.626000   71907 command_runner.go:130] > CGROUPS_CPUSET: enabled
	I0531 19:09:00.626010   71907 command_runner.go:130] > CGROUPS_DEVICES: enabled
	I0531 19:09:00.626016   71907 command_runner.go:130] > CGROUPS_FREEZER: enabled
	I0531 19:09:00.626022   71907 command_runner.go:130] > CGROUPS_MEMORY: enabled
	I0531 19:09:00.626033   71907 command_runner.go:130] > CGROUPS_PIDS: enabled
	I0531 19:09:00.626039   71907 command_runner.go:130] > CGROUPS_HUGETLB: enabled
	I0531 19:09:00.626046   71907 command_runner.go:130] > CGROUPS_BLKIO: enabled
	I0531 19:09:00.738472   71907 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I0531 19:09:00.738496   71907 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	I0531 19:09:00.770075   71907 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0531 19:09:00.770314   71907 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0531 19:09:00.770332   71907 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0531 19:09:00.871107   71907 command_runner.go:130] > [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
	I0531 19:09:03.392047   71907 command_runner.go:130] > This node has joined the cluster:
	I0531 19:09:03.392120   71907 command_runner.go:130] > * Certificate signing request was sent to apiserver and a response was received.
	I0531 19:09:03.392143   71907 command_runner.go:130] > * The Kubelet was informed of the new secure connection details.
	I0531 19:09:03.392169   71907 command_runner.go:130] > Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
	I0531 19:09:03.395052   71907 command_runner.go:130] ! W0531 19:09:00.578785    1025 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/crio/crio.sock". Please update your configuration!
	I0531 19:09:03.395080   71907 command_runner.go:130] ! 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1037-aws\n", err: exit status 1
	I0531 19:09:03.395092   71907 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0531 19:09:03.395105   71907 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 4u95ah.o7m6wpolcmim38f4 --discovery-token-ca-cert-hash sha256:8cab06d0df96335aa364fa490e6822dfb6067993e3a4e02e6ce54947f37c8db2 --ignore-preflight-errors=all --cri-socket /var/run/crio/crio.sock --node-name=multinode-025078-m02": (2.863210613s)
	I0531 19:09:03.395120   71907 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0531 19:09:03.617419   71907 command_runner.go:130] ! Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /lib/systemd/system/kubelet.service.
	I0531 19:09:03.617445   71907 start.go:303] JoinCluster complete in 3.278599936s
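	The whole join is the two commands visible above: kubeadm token create --print-join-command on the control plane, then the printed kubeadm join on the worker with minikube's extra flags. A rough local sketch of that sequence (minikube actually runs both over SSH through ssh_runner; error handling trimmed):

	package example

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// joinWorker mirrors the token-create/join pair from the log. It runs
	// locally rather than over SSH, so it is illustrative only.
	func joinWorker(nodeName string) error {
		out, err := exec.Command("kubeadm", "token", "create",
			"--print-join-command", "--ttl=0").Output()
		if err != nil {
			return err
		}
		args := strings.Fields(strings.TrimSpace(string(out)))
		if len(args) == 0 {
			return fmt.Errorf("empty join command")
		}
		// Same extra flags the log shows for the worker join.
		args = append(args, "--ignore-preflight-errors=all",
			"--cri-socket", "/var/run/crio/crio.sock",
			"--node-name="+nodeName)
		b, err := exec.Command(args[0], args[1:]...).CombinedOutput()
		fmt.Printf("%s", b)
		return err
	}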
	I0531 19:09:03.617456   71907 cni.go:84] Creating CNI manager for ""
	I0531 19:09:03.617462   71907 cni.go:136] 2 nodes found, recommending kindnet
	I0531 19:09:03.617517   71907 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0531 19:09:03.622912   71907 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0531 19:09:03.622935   71907 command_runner.go:130] >   Size: 3841245   	Blocks: 7504       IO Block: 4096   regular file
	I0531 19:09:03.622943   71907 command_runner.go:130] > Device: 36h/54d	Inode: 1306535     Links: 1
	I0531 19:09:03.622951   71907 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0531 19:09:03.622958   71907 command_runner.go:130] > Access: 2023-05-09 19:54:42.000000000 +0000
	I0531 19:09:03.622965   71907 command_runner.go:130] > Modify: 2023-05-09 19:54:42.000000000 +0000
	I0531 19:09:03.622971   71907 command_runner.go:130] > Change: 2023-05-31 18:44:35.901126368 +0000
	I0531 19:09:03.622983   71907 command_runner.go:130] >  Birth: 2023-05-31 18:44:35.857126755 +0000
	I0531 19:09:03.623506   71907 cni.go:181] applying CNI manifest using /var/lib/minikube/binaries/v1.27.2/kubectl ...
	I0531 19:09:03.623525   71907 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0531 19:09:03.669996   71907 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0531 19:09:04.042393   71907 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I0531 19:09:04.052167   71907 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I0531 19:09:04.057489   71907 command_runner.go:130] > serviceaccount/kindnet unchanged
	I0531 19:09:04.077745   71907 command_runner.go:130] > daemonset.apps/kindnet configured
	I0531 19:09:04.083281   71907 loader.go:373] Config loaded from file:  /home/jenkins/minikube-integration/16569-2389/kubeconfig
	I0531 19:09:04.083645   71907 kapi.go:59] client config for multinode-025078: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/16569-2389/.minikube/profiles/multinode-025078/client.crt", KeyFile:"/home/jenkins/minikube-integration/16569-2389/.minikube/profiles/multinode-025078/client.key", CAFile:"/home/jenkins/minikube-integration/16569-2389/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x13dde10), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0531 19:09:04.084068   71907 round_trippers.go:463] GET https://192.168.58.2:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0531 19:09:04.084109   71907 round_trippers.go:469] Request Headers:
	I0531 19:09:04.084130   71907 round_trippers.go:473]     Accept: application/json, */*
	I0531 19:09:04.084151   71907 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0531 19:09:04.089906   71907 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0531 19:09:04.089975   71907 round_trippers.go:577] Response Headers:
	I0531 19:09:04.089997   71907 round_trippers.go:580]     Cache-Control: no-cache, private
	I0531 19:09:04.090022   71907 round_trippers.go:580]     Content-Type: application/json
	I0531 19:09:04.090057   71907 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 61ef383b-79fe-44e8-8f9c-d13c8cd9f602
	I0531 19:09:04.090081   71907 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9616b646-f7d3-407a-9b78-ebb52861f5ac
	I0531 19:09:04.090106   71907 round_trippers.go:580]     Content-Length: 291
	I0531 19:09:04.090140   71907 round_trippers.go:580]     Date: Wed, 31 May 2023 19:09:04 GMT
	I0531 19:09:04.090165   71907 round_trippers.go:580]     Audit-Id: 388504ec-854e-4f9d-ae05-23d91eefa0ec
	I0531 19:09:04.090391   71907 request.go:1188] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"d8cb62c2-6f96-4520-9400-e74374977fc2","resourceVersion":"446","creationTimestamp":"2023-05-31T19:08:29Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I0531 19:09:04.090536   71907 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-025078" context rescaled to 1 replicas
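	The rescale goes through the deployments/coredns/scale subresource; the excerpt shows the GET, and the "rescaled" message implies the matching update. A client-go sketch of that operation, assuming an already-constructed clientset:

	package example

	import (
		"context"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
	)

	// rescaleCoreDNS reads the Scale subresource of kube-system/coredns
	// and pins it to one replica, as the log reports.
	func rescaleCoreDNS(ctx context.Context, cs *kubernetes.Clientset) error {
		scale, err := cs.AppsV1().Deployments("kube-system").
			GetScale(ctx, "coredns", metav1.GetOptions{})
		if err != nil {
			return err
		}
		scale.Spec.Replicas = 1
		_, err = cs.AppsV1().Deployments("kube-system").
			UpdateScale(ctx, "coredns", scale, metav1.UpdateOptions{})
		return err
	}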
	I0531 19:09:04.090580   71907 start.go:223] Will wait 6m0s for node &{Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.27.2 ContainerRuntime:crio ControlPlane:false Worker:true}
	I0531 19:09:04.094044   71907 out.go:177] * Verifying Kubernetes components...
	I0531 19:09:04.095812   71907 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0531 19:09:04.135517   71907 loader.go:373] Config loaded from file:  /home/jenkins/minikube-integration/16569-2389/kubeconfig
	I0531 19:09:04.135799   71907 kapi.go:59] client config for multinode-025078: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/16569-2389/.minikube/profiles/multinode-025078/client.crt", KeyFile:"/home/jenkins/minikube-integration/16569-2389/.minikube/profiles/multinode-025078/client.key", CAFile:"/home/jenkins/minikube-integration/16569-2389/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x13dde10), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0531 19:09:04.136084   71907 node_ready.go:35] waiting up to 6m0s for node "multinode-025078-m02" to be "Ready" ...
	I0531 19:09:04.136148   71907 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-025078-m02
	I0531 19:09:04.136153   71907 round_trippers.go:469] Request Headers:
	I0531 19:09:04.136162   71907 round_trippers.go:473]     Accept: application/json, */*
	I0531 19:09:04.136169   71907 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0531 19:09:04.138811   71907 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0531 19:09:04.138832   71907 round_trippers.go:577] Response Headers:
	I0531 19:09:04.138841   71907 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 61ef383b-79fe-44e8-8f9c-d13c8cd9f602
	I0531 19:09:04.138848   71907 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9616b646-f7d3-407a-9b78-ebb52861f5ac
	I0531 19:09:04.138855   71907 round_trippers.go:580]     Date: Wed, 31 May 2023 19:09:04 GMT
	I0531 19:09:04.138861   71907 round_trippers.go:580]     Audit-Id: b9fb2c14-2665-4bf6-9239-728873a75412
	I0531 19:09:04.138868   71907 round_trippers.go:580]     Cache-Control: no-cache, private
	I0531 19:09:04.138875   71907 round_trippers.go:580]     Content-Type: application/json
	I0531 19:09:04.139000   71907 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-025078-m02","uid":"2e53b761-1f56-4a20-85d2-5aea4f2417a6","resourceVersion":"485","creationTimestamp":"2023-05-31T19:09:03Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-025078-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-05-31T19:09:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-05-31T19:09:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alp [truncated 5183 chars]
	I0531 19:09:04.639941   71907 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-025078-m02
	I0531 19:09:04.639959   71907 round_trippers.go:469] Request Headers:
	I0531 19:09:04.639968   71907 round_trippers.go:473]     Accept: application/json, */*
	I0531 19:09:04.639976   71907 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0531 19:09:04.642610   71907 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0531 19:09:04.642632   71907 round_trippers.go:577] Response Headers:
	I0531 19:09:04.642641   71907 round_trippers.go:580]     Date: Wed, 31 May 2023 19:09:04 GMT
	I0531 19:09:04.642648   71907 round_trippers.go:580]     Audit-Id: fd241027-38bd-4d10-a07f-6f513fbed2a0
	I0531 19:09:04.642655   71907 round_trippers.go:580]     Cache-Control: no-cache, private
	I0531 19:09:04.642662   71907 round_trippers.go:580]     Content-Type: application/json
	I0531 19:09:04.642668   71907 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 61ef383b-79fe-44e8-8f9c-d13c8cd9f602
	I0531 19:09:04.642675   71907 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9616b646-f7d3-407a-9b78-ebb52861f5ac
	I0531 19:09:04.642838   71907 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-025078-m02","uid":"2e53b761-1f56-4a20-85d2-5aea4f2417a6","resourceVersion":"485","creationTimestamp":"2023-05-31T19:09:03Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-025078-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-05-31T19:09:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-05-31T19:09:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alp [truncated 5183 chars]
	I0531 19:09:05.139888   71907 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-025078-m02
	I0531 19:09:05.139910   71907 round_trippers.go:469] Request Headers:
	I0531 19:09:05.139921   71907 round_trippers.go:473]     Accept: application/json, */*
	I0531 19:09:05.139928   71907 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0531 19:09:05.144000   71907 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0531 19:09:05.144072   71907 round_trippers.go:577] Response Headers:
	I0531 19:09:05.144096   71907 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 61ef383b-79fe-44e8-8f9c-d13c8cd9f602
	I0531 19:09:05.144116   71907 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9616b646-f7d3-407a-9b78-ebb52861f5ac
	I0531 19:09:05.144151   71907 round_trippers.go:580]     Date: Wed, 31 May 2023 19:09:05 GMT
	I0531 19:09:05.144177   71907 round_trippers.go:580]     Audit-Id: 083e3fa4-b755-498f-a831-207a8e2802e5
	I0531 19:09:05.144199   71907 round_trippers.go:580]     Cache-Control: no-cache, private
	I0531 19:09:05.144231   71907 round_trippers.go:580]     Content-Type: application/json
	I0531 19:09:05.144771   71907 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-025078-m02","uid":"2e53b761-1f56-4a20-85d2-5aea4f2417a6","resourceVersion":"485","creationTimestamp":"2023-05-31T19:09:03Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-025078-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-05-31T19:09:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-05-31T19:09:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alp [truncated 5183 chars]
	I0531 19:09:05.639597   71907 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-025078-m02
	I0531 19:09:05.639623   71907 round_trippers.go:469] Request Headers:
	I0531 19:09:05.639633   71907 round_trippers.go:473]     Accept: application/json, */*
	I0531 19:09:05.639648   71907 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0531 19:09:05.642249   71907 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0531 19:09:05.642269   71907 round_trippers.go:577] Response Headers:
	I0531 19:09:05.642278   71907 round_trippers.go:580]     Audit-Id: d40bcdfb-26c8-4165-81c9-33237cbb8832
	I0531 19:09:05.642285   71907 round_trippers.go:580]     Cache-Control: no-cache, private
	I0531 19:09:05.642291   71907 round_trippers.go:580]     Content-Type: application/json
	I0531 19:09:05.642298   71907 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 61ef383b-79fe-44e8-8f9c-d13c8cd9f602
	I0531 19:09:05.642305   71907 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9616b646-f7d3-407a-9b78-ebb52861f5ac
	I0531 19:09:05.642312   71907 round_trippers.go:580]     Date: Wed, 31 May 2023 19:09:05 GMT
	I0531 19:09:05.642414   71907 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-025078-m02","uid":"2e53b761-1f56-4a20-85d2-5aea4f2417a6","resourceVersion":"503","creationTimestamp":"2023-05-31T19:09:03Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-025078-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-05-31T19:09:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-05-31T19:09:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alp [truncated 5258 chars]
	I0531 19:09:05.642800   71907 node_ready.go:49] node "multinode-025078-m02" has status "Ready":"True"
	I0531 19:09:05.642810   71907 node_ready.go:38] duration metric: took 1.506715s waiting for node "multinode-025078-m02" to be "Ready" ...
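	The repeated node GETs above are simply a poll until the Ready condition flips to True. A client-go sketch of an equivalent wait (minikube's own node_ready helper may differ in detail), assuming a prepared clientset:

	package example

	import (
		"context"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
	)

	// waitNodeReady polls the node object, like the GETs above, until
	// its Ready condition is True or the timeout expires.
	func waitNodeReady(cs *kubernetes.Clientset, name string, timeout time.Duration) error {
		return wait.PollImmediate(500*time.Millisecond, timeout, func() (bool, error) {
			node, err := cs.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
			if err != nil {
				return false, err
			}
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady {
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
	}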
	I0531 19:09:05.642819   71907 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0531 19:09:05.642876   71907 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I0531 19:09:05.642881   71907 round_trippers.go:469] Request Headers:
	I0531 19:09:05.642889   71907 round_trippers.go:473]     Accept: application/json, */*
	I0531 19:09:05.642896   71907 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0531 19:09:05.646818   71907 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0531 19:09:05.646881   71907 round_trippers.go:577] Response Headers:
	I0531 19:09:05.646897   71907 round_trippers.go:580]     Cache-Control: no-cache, private
	I0531 19:09:05.646905   71907 round_trippers.go:580]     Content-Type: application/json
	I0531 19:09:05.646912   71907 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 61ef383b-79fe-44e8-8f9c-d13c8cd9f602
	I0531 19:09:05.646919   71907 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9616b646-f7d3-407a-9b78-ebb52861f5ac
	I0531 19:09:05.646932   71907 round_trippers.go:580]     Date: Wed, 31 May 2023 19:09:05 GMT
	I0531 19:09:05.646939   71907 round_trippers.go:580]     Audit-Id: 48044ebf-3234-4a7e-9e16-f6005916b26a
	I0531 19:09:05.647389   71907 request.go:1188] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"503"},"items":[{"metadata":{"name":"coredns-5d78c9869d-hhw4h","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"3f927f21-c6c7-43a2-a635-4fd3c672172d","resourceVersion":"442","creationTimestamp":"2023-05-31T19:08:43Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"463ad278-b9f3-4d58-8542-8ec925fef61a","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-05-31T19:08:43Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"463ad278-b9f3-4d58-8542-8ec925fef61a\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 68970 chars]
	I0531 19:09:05.650307   71907 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5d78c9869d-hhw4h" in "kube-system" namespace to be "Ready" ...
	I0531 19:09:05.650387   71907 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5d78c9869d-hhw4h
	I0531 19:09:05.650398   71907 round_trippers.go:469] Request Headers:
	I0531 19:09:05.650409   71907 round_trippers.go:473]     Accept: application/json, */*
	I0531 19:09:05.650426   71907 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0531 19:09:05.653090   71907 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0531 19:09:05.653129   71907 round_trippers.go:577] Response Headers:
	I0531 19:09:05.653139   71907 round_trippers.go:580]     Date: Wed, 31 May 2023 19:09:05 GMT
	I0531 19:09:05.653146   71907 round_trippers.go:580]     Audit-Id: c4e73896-63c8-449d-95bd-a3760c8936e5
	I0531 19:09:05.653153   71907 round_trippers.go:580]     Cache-Control: no-cache, private
	I0531 19:09:05.653161   71907 round_trippers.go:580]     Content-Type: application/json
	I0531 19:09:05.653170   71907 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 61ef383b-79fe-44e8-8f9c-d13c8cd9f602
	I0531 19:09:05.653180   71907 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9616b646-f7d3-407a-9b78-ebb52861f5ac
	I0531 19:09:05.653277   71907 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5d78c9869d-hhw4h","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"3f927f21-c6c7-43a2-a635-4fd3c672172d","resourceVersion":"442","creationTimestamp":"2023-05-31T19:08:43Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"463ad278-b9f3-4d58-8542-8ec925fef61a","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-05-31T19:08:43Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"463ad278-b9f3-4d58-8542-8ec925fef61a\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6263 chars]
	I0531 19:09:05.653842   71907 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-025078
	I0531 19:09:05.653856   71907 round_trippers.go:469] Request Headers:
	I0531 19:09:05.653864   71907 round_trippers.go:473]     Accept: application/json, */*
	I0531 19:09:05.653871   71907 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0531 19:09:05.656495   71907 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0531 19:09:05.656518   71907 round_trippers.go:577] Response Headers:
	I0531 19:09:05.656528   71907 round_trippers.go:580]     Cache-Control: no-cache, private
	I0531 19:09:05.656535   71907 round_trippers.go:580]     Content-Type: application/json
	I0531 19:09:05.656542   71907 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 61ef383b-79fe-44e8-8f9c-d13c8cd9f602
	I0531 19:09:05.656549   71907 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9616b646-f7d3-407a-9b78-ebb52861f5ac
	I0531 19:09:05.656565   71907 round_trippers.go:580]     Date: Wed, 31 May 2023 19:09:05 GMT
	I0531 19:09:05.656576   71907 round_trippers.go:580]     Audit-Id: 12874046-ed0c-4c8c-b551-032c07cb1efd
	I0531 19:09:05.657839   71907 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-025078","uid":"527e40a2-2af7-4de3-b462-a471bac33c44","resourceVersion":"421","creationTimestamp":"2023-05-31T19:08:26Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-025078","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7022875d4a054c2d518e5e5a7b9d500799d50140","minikube.k8s.io/name":"multinode-025078","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_05_31T19_08_30_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-05-31T19:08:26Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I0531 19:09:05.658247   71907 pod_ready.go:92] pod "coredns-5d78c9869d-hhw4h" in "kube-system" namespace has status "Ready":"True"
	I0531 19:09:05.658263   71907 pod_ready.go:81] duration metric: took 7.930516ms waiting for pod "coredns-5d78c9869d-hhw4h" in "kube-system" namespace to be "Ready" ...
	I0531 19:09:05.658274   71907 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-025078" in "kube-system" namespace to be "Ready" ...
	I0531 19:09:05.658333   71907 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-025078
	I0531 19:09:05.658343   71907 round_trippers.go:469] Request Headers:
	I0531 19:09:05.658352   71907 round_trippers.go:473]     Accept: application/json, */*
	I0531 19:09:05.658360   71907 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0531 19:09:05.660851   71907 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0531 19:09:05.660874   71907 round_trippers.go:577] Response Headers:
	I0531 19:09:05.660883   71907 round_trippers.go:580]     Date: Wed, 31 May 2023 19:09:05 GMT
	I0531 19:09:05.660890   71907 round_trippers.go:580]     Audit-Id: 8031091c-f846-4025-b65c-ee9f1d0f98d4
	I0531 19:09:05.660897   71907 round_trippers.go:580]     Cache-Control: no-cache, private
	I0531 19:09:05.660904   71907 round_trippers.go:580]     Content-Type: application/json
	I0531 19:09:05.660916   71907 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 61ef383b-79fe-44e8-8f9c-d13c8cd9f602
	I0531 19:09:05.660923   71907 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9616b646-f7d3-407a-9b78-ebb52861f5ac
	I0531 19:09:05.661089   71907 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-025078","namespace":"kube-system","uid":"ae9f84a1-9fff-46b4-b27d-4459cba13a8b","resourceVersion":"450","creationTimestamp":"2023-05-31T19:08:29Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"53d7bfe459fc0f27c783040121a9fd6b","kubernetes.io/config.mirror":"53d7bfe459fc0f27c783040121a9fd6b","kubernetes.io/config.seen":"2023-05-31T19:08:29.235417191Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-025078","uid":"527e40a2-2af7-4de3-b462-a471bac33c44","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-05-31T19:08:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 5833 chars]
	I0531 19:09:05.661581   71907 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-025078
	I0531 19:09:05.661594   71907 round_trippers.go:469] Request Headers:
	I0531 19:09:05.661603   71907 round_trippers.go:473]     Accept: application/json, */*
	I0531 19:09:05.661611   71907 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0531 19:09:05.664013   71907 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0531 19:09:05.664067   71907 round_trippers.go:577] Response Headers:
	I0531 19:09:05.664092   71907 round_trippers.go:580]     Audit-Id: fdf5cb7f-38aa-43b5-8477-1dfb16f20225
	I0531 19:09:05.664118   71907 round_trippers.go:580]     Cache-Control: no-cache, private
	I0531 19:09:05.664153   71907 round_trippers.go:580]     Content-Type: application/json
	I0531 19:09:05.664167   71907 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 61ef383b-79fe-44e8-8f9c-d13c8cd9f602
	I0531 19:09:05.664174   71907 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9616b646-f7d3-407a-9b78-ebb52861f5ac
	I0531 19:09:05.664181   71907 round_trippers.go:580]     Date: Wed, 31 May 2023 19:09:05 GMT
	I0531 19:09:05.664329   71907 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-025078","uid":"527e40a2-2af7-4de3-b462-a471bac33c44","resourceVersion":"421","creationTimestamp":"2023-05-31T19:08:26Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-025078","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7022875d4a054c2d518e5e5a7b9d500799d50140","minikube.k8s.io/name":"multinode-025078","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_05_31T19_08_30_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-05-31T19:08:26Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I0531 19:09:05.664755   71907 pod_ready.go:92] pod "etcd-multinode-025078" in "kube-system" namespace has status "Ready":"True"
	I0531 19:09:05.664773   71907 pod_ready.go:81] duration metric: took 6.492678ms waiting for pod "etcd-multinode-025078" in "kube-system" namespace to be "Ready" ...
	I0531 19:09:05.664793   71907 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-025078" in "kube-system" namespace to be "Ready" ...
	I0531 19:09:05.664866   71907 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-025078
	I0531 19:09:05.664874   71907 round_trippers.go:469] Request Headers:
	I0531 19:09:05.664883   71907 round_trippers.go:473]     Accept: application/json, */*
	I0531 19:09:05.664890   71907 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0531 19:09:05.667469   71907 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0531 19:09:05.667495   71907 round_trippers.go:577] Response Headers:
	I0531 19:09:05.667504   71907 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 61ef383b-79fe-44e8-8f9c-d13c8cd9f602
	I0531 19:09:05.667511   71907 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9616b646-f7d3-407a-9b78-ebb52861f5ac
	I0531 19:09:05.667525   71907 round_trippers.go:580]     Date: Wed, 31 May 2023 19:09:05 GMT
	I0531 19:09:05.667538   71907 round_trippers.go:580]     Audit-Id: 13028837-2a85-44e5-ad72-a53774821564
	I0531 19:09:05.667546   71907 round_trippers.go:580]     Cache-Control: no-cache, private
	I0531 19:09:05.667553   71907 round_trippers.go:580]     Content-Type: application/json
	I0531 19:09:05.667913   71907 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-025078","namespace":"kube-system","uid":"c0951b5b-7914-475e-93d5-2b8513832b1e","resourceVersion":"451","creationTimestamp":"2023-05-31T19:08:27Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.58.2:8443","kubernetes.io/config.hash":"030748278c563db19133b8fcd1436188","kubernetes.io/config.mirror":"030748278c563db19133b8fcd1436188","kubernetes.io/config.seen":"2023-05-31T19:08:20.983574828Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-025078","uid":"527e40a2-2af7-4de3-b462-a471bac33c44","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-05-31T19:08:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kube
rnetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes.i [truncated 8219 chars]
	I0531 19:09:05.668483   71907 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-025078
	I0531 19:09:05.668501   71907 round_trippers.go:469] Request Headers:
	I0531 19:09:05.668510   71907 round_trippers.go:473]     Accept: application/json, */*
	I0531 19:09:05.668518   71907 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0531 19:09:05.670996   71907 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0531 19:09:05.671049   71907 round_trippers.go:577] Response Headers:
	I0531 19:09:05.671071   71907 round_trippers.go:580]     Audit-Id: 9b085e6e-f215-4451-b6b0-7e1eb9d28ccd
	I0531 19:09:05.671079   71907 round_trippers.go:580]     Cache-Control: no-cache, private
	I0531 19:09:05.671086   71907 round_trippers.go:580]     Content-Type: application/json
	I0531 19:09:05.671093   71907 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 61ef383b-79fe-44e8-8f9c-d13c8cd9f602
	I0531 19:09:05.671106   71907 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9616b646-f7d3-407a-9b78-ebb52861f5ac
	I0531 19:09:05.671126   71907 round_trippers.go:580]     Date: Wed, 31 May 2023 19:09:05 GMT
	I0531 19:09:05.671462   71907 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-025078","uid":"527e40a2-2af7-4de3-b462-a471bac33c44","resourceVersion":"421","creationTimestamp":"2023-05-31T19:08:26Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-025078","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7022875d4a054c2d518e5e5a7b9d500799d50140","minikube.k8s.io/name":"multinode-025078","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_05_31T19_08_30_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-05-31T19:08:26Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I0531 19:09:05.671898   71907 pod_ready.go:92] pod "kube-apiserver-multinode-025078" in "kube-system" namespace has status "Ready":"True"
	I0531 19:09:05.671916   71907 pod_ready.go:81] duration metric: took 7.113497ms waiting for pod "kube-apiserver-multinode-025078" in "kube-system" namespace to be "Ready" ...
	I0531 19:09:05.671929   71907 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-025078" in "kube-system" namespace to be "Ready" ...
	I0531 19:09:05.671993   71907 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-025078
	I0531 19:09:05.672003   71907 round_trippers.go:469] Request Headers:
	I0531 19:09:05.672012   71907 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0531 19:09:05.672019   71907 round_trippers.go:473]     Accept: application/json, */*
	I0531 19:09:05.674583   71907 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0531 19:09:05.674634   71907 round_trippers.go:577] Response Headers:
	I0531 19:09:05.674663   71907 round_trippers.go:580]     Date: Wed, 31 May 2023 19:09:05 GMT
	I0531 19:09:05.674673   71907 round_trippers.go:580]     Audit-Id: a1e89b81-f586-40af-922e-d163c3f9fc38
	I0531 19:09:05.674684   71907 round_trippers.go:580]     Cache-Control: no-cache, private
	I0531 19:09:05.674692   71907 round_trippers.go:580]     Content-Type: application/json
	I0531 19:09:05.674702   71907 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 61ef383b-79fe-44e8-8f9c-d13c8cd9f602
	I0531 19:09:05.674708   71907 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9616b646-f7d3-407a-9b78-ebb52861f5ac
	I0531 19:09:05.674870   71907 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-025078","namespace":"kube-system","uid":"5c741586-53ad-4e55-9aba-d0f8355f2eec","resourceVersion":"452","creationTimestamp":"2023-05-31T19:08:27Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"0260a712665b8daacf232950e34a5748","kubernetes.io/config.mirror":"0260a712665b8daacf232950e34a5748","kubernetes.io/config.seen":"2023-05-31T19:08:20.983576034Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-025078","uid":"527e40a2-2af7-4de3-b462-a471bac33c44","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-05-31T19:08:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7794 chars]
	I0531 19:09:05.675410   71907 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-025078
	I0531 19:09:05.675426   71907 round_trippers.go:469] Request Headers:
	I0531 19:09:05.675434   71907 round_trippers.go:473]     Accept: application/json, */*
	I0531 19:09:05.675442   71907 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0531 19:09:05.677739   71907 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0531 19:09:05.677760   71907 round_trippers.go:577] Response Headers:
	I0531 19:09:05.677769   71907 round_trippers.go:580]     Audit-Id: 2f62dff7-d08e-41e2-92ba-58139f620cc1
	I0531 19:09:05.677777   71907 round_trippers.go:580]     Cache-Control: no-cache, private
	I0531 19:09:05.677784   71907 round_trippers.go:580]     Content-Type: application/json
	I0531 19:09:05.677790   71907 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 61ef383b-79fe-44e8-8f9c-d13c8cd9f602
	I0531 19:09:05.677800   71907 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9616b646-f7d3-407a-9b78-ebb52861f5ac
	I0531 19:09:05.677807   71907 round_trippers.go:580]     Date: Wed, 31 May 2023 19:09:05 GMT
	I0531 19:09:05.678015   71907 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-025078","uid":"527e40a2-2af7-4de3-b462-a471bac33c44","resourceVersion":"421","creationTimestamp":"2023-05-31T19:08:26Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-025078","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7022875d4a054c2d518e5e5a7b9d500799d50140","minikube.k8s.io/name":"multinode-025078","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_05_31T19_08_30_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-05-31T19:08:26Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I0531 19:09:05.678446   71907 pod_ready.go:92] pod "kube-controller-manager-multinode-025078" in "kube-system" namespace has status "Ready":"True"
	I0531 19:09:05.678463   71907 pod_ready.go:81] duration metric: took 6.522856ms waiting for pod "kube-controller-manager-multinode-025078" in "kube-system" namespace to be "Ready" ...
	I0531 19:09:05.678475   71907 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-hwxjb" in "kube-system" namespace to be "Ready" ...
	I0531 19:09:05.839873   71907 request.go:628] Waited for 161.326015ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-hwxjb
	I0531 19:09:05.839946   71907 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-hwxjb
	I0531 19:09:05.839957   71907 round_trippers.go:469] Request Headers:
	I0531 19:09:05.839966   71907 round_trippers.go:473]     Accept: application/json, */*
	I0531 19:09:05.839974   71907 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0531 19:09:05.842673   71907 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0531 19:09:05.842700   71907 round_trippers.go:577] Response Headers:
	I0531 19:09:05.842711   71907 round_trippers.go:580]     Content-Type: application/json
	I0531 19:09:05.842718   71907 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 61ef383b-79fe-44e8-8f9c-d13c8cd9f602
	I0531 19:09:05.842725   71907 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9616b646-f7d3-407a-9b78-ebb52861f5ac
	I0531 19:09:05.842747   71907 round_trippers.go:580]     Date: Wed, 31 May 2023 19:09:05 GMT
	I0531 19:09:05.842755   71907 round_trippers.go:580]     Audit-Id: bbc07a3d-7154-4c3c-b6f2-a7f4e89a6b6d
	I0531 19:09:05.842762   71907 round_trippers.go:580]     Cache-Control: no-cache, private
	I0531 19:09:05.843058   71907 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-hwxjb","generateName":"kube-proxy-","namespace":"kube-system","uid":"463d516b-02c6-41b7-9a5a-397a2a1a1d6d","resourceVersion":"497","creationTimestamp":"2023-05-31T19:09:03Z","labels":{"controller-revision-hash":"8bdf7b6c","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"863e2ca0-e19e-4d3d-aad8-d9f365be6205","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-05-31T19:09:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"863e2ca0-e19e-4d3d-aad8-d9f365be6205\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:req
uiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\ [truncated 5516 chars]
	I0531 19:09:06.039897   71907 request.go:628] Waited for 196.31817ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-025078-m02
	I0531 19:09:06.040008   71907 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-025078-m02
	I0531 19:09:06.040042   71907 round_trippers.go:469] Request Headers:
	I0531 19:09:06.040068   71907 round_trippers.go:473]     Accept: application/json, */*
	I0531 19:09:06.040092   71907 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0531 19:09:06.042709   71907 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0531 19:09:06.042803   71907 round_trippers.go:577] Response Headers:
	I0531 19:09:06.042829   71907 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 61ef383b-79fe-44e8-8f9c-d13c8cd9f602
	I0531 19:09:06.042932   71907 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9616b646-f7d3-407a-9b78-ebb52861f5ac
	I0531 19:09:06.042959   71907 round_trippers.go:580]     Date: Wed, 31 May 2023 19:09:06 GMT
	I0531 19:09:06.042980   71907 round_trippers.go:580]     Audit-Id: 2d419e21-3da0-481f-9956-e829ba82094e
	I0531 19:09:06.043006   71907 round_trippers.go:580]     Cache-Control: no-cache, private
	I0531 19:09:06.043040   71907 round_trippers.go:580]     Content-Type: application/json
	I0531 19:09:06.043168   71907 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-025078-m02","uid":"2e53b761-1f56-4a20-85d2-5aea4f2417a6","resourceVersion":"503","creationTimestamp":"2023-05-31T19:09:03Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-025078-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-05-31T19:09:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-05-31T19:09:03Z","fieldsTyp
e":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alp [truncated 5258 chars]
	I0531 19:09:06.043558   71907 pod_ready.go:92] pod "kube-proxy-hwxjb" in "kube-system" namespace has status "Ready":"True"
	I0531 19:09:06.043577   71907 pod_ready.go:81] duration metric: took 365.095944ms waiting for pod "kube-proxy-hwxjb" in "kube-system" namespace to be "Ready" ...
	I0531 19:09:06.043591   71907 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-ws8xb" in "kube-system" namespace to be "Ready" ...
	I0531 19:09:06.240022   71907 request.go:628] Waited for 196.356767ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-ws8xb
	I0531 19:09:06.240096   71907 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-ws8xb
	I0531 19:09:06.240123   71907 round_trippers.go:469] Request Headers:
	I0531 19:09:06.240139   71907 round_trippers.go:473]     Accept: application/json, */*
	I0531 19:09:06.240159   71907 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0531 19:09:06.242799   71907 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0531 19:09:06.242834   71907 round_trippers.go:577] Response Headers:
	I0531 19:09:06.242843   71907 round_trippers.go:580]     Audit-Id: b2a4f902-9838-40ba-8ee0-43ba61da036b
	I0531 19:09:06.242850   71907 round_trippers.go:580]     Cache-Control: no-cache, private
	I0531 19:09:06.242857   71907 round_trippers.go:580]     Content-Type: application/json
	I0531 19:09:06.242877   71907 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 61ef383b-79fe-44e8-8f9c-d13c8cd9f602
	I0531 19:09:06.242885   71907 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9616b646-f7d3-407a-9b78-ebb52861f5ac
	I0531 19:09:06.242903   71907 round_trippers.go:580]     Date: Wed, 31 May 2023 19:09:06 GMT
	I0531 19:09:06.243024   71907 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-ws8xb","generateName":"kube-proxy-","namespace":"kube-system","uid":"d0bcd8bd-2828-4ad8-affa-aa8fb8b01b14","resourceVersion":"415","creationTimestamp":"2023-05-31T19:08:42Z","labels":{"controller-revision-hash":"8bdf7b6c","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"863e2ca0-e19e-4d3d-aad8-d9f365be6205","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-05-31T19:08:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"863e2ca0-e19e-4d3d-aad8-d9f365be6205\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:req
uiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\ [truncated 5508 chars]
	I0531 19:09:06.439717   71907 request.go:628] Waited for 196.159222ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-025078
	I0531 19:09:06.439841   71907 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-025078
	I0531 19:09:06.439874   71907 round_trippers.go:469] Request Headers:
	I0531 19:09:06.439887   71907 round_trippers.go:473]     Accept: application/json, */*
	I0531 19:09:06.439908   71907 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0531 19:09:06.443050   71907 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0531 19:09:06.443085   71907 round_trippers.go:577] Response Headers:
	I0531 19:09:06.443095   71907 round_trippers.go:580]     Audit-Id: c0fea28e-b171-41ef-a718-f2e775478e7b
	I0531 19:09:06.443102   71907 round_trippers.go:580]     Cache-Control: no-cache, private
	I0531 19:09:06.443131   71907 round_trippers.go:580]     Content-Type: application/json
	I0531 19:09:06.443145   71907 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 61ef383b-79fe-44e8-8f9c-d13c8cd9f602
	I0531 19:09:06.443153   71907 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9616b646-f7d3-407a-9b78-ebb52861f5ac
	I0531 19:09:06.443163   71907 round_trippers.go:580]     Date: Wed, 31 May 2023 19:09:06 GMT
	I0531 19:09:06.443284   71907 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-025078","uid":"527e40a2-2af7-4de3-b462-a471bac33c44","resourceVersion":"421","creationTimestamp":"2023-05-31T19:08:26Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-025078","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7022875d4a054c2d518e5e5a7b9d500799d50140","minikube.k8s.io/name":"multinode-025078","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_05_31T19_08_30_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-05-31T19:08:26Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I0531 19:09:06.443706   71907 pod_ready.go:92] pod "kube-proxy-ws8xb" in "kube-system" namespace has status "Ready":"True"
	I0531 19:09:06.443723   71907 pod_ready.go:81] duration metric: took 400.121117ms waiting for pod "kube-proxy-ws8xb" in "kube-system" namespace to be "Ready" ...
	I0531 19:09:06.443736   71907 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-025078" in "kube-system" namespace to be "Ready" ...
	I0531 19:09:06.640143   71907 request.go:628] Waited for 196.340554ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-025078
	I0531 19:09:06.640253   71907 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-025078
	I0531 19:09:06.640266   71907 round_trippers.go:469] Request Headers:
	I0531 19:09:06.640319   71907 round_trippers.go:473]     Accept: application/json, */*
	I0531 19:09:06.640334   71907 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0531 19:09:06.643189   71907 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0531 19:09:06.643258   71907 round_trippers.go:577] Response Headers:
	I0531 19:09:06.643275   71907 round_trippers.go:580]     Cache-Control: no-cache, private
	I0531 19:09:06.643286   71907 round_trippers.go:580]     Content-Type: application/json
	I0531 19:09:06.643293   71907 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 61ef383b-79fe-44e8-8f9c-d13c8cd9f602
	I0531 19:09:06.643300   71907 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9616b646-f7d3-407a-9b78-ebb52861f5ac
	I0531 19:09:06.643307   71907 round_trippers.go:580]     Date: Wed, 31 May 2023 19:09:06 GMT
	I0531 19:09:06.643330   71907 round_trippers.go:580]     Audit-Id: f07b38b2-ed1a-4963-9957-1c755642d905
	I0531 19:09:06.643461   71907 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-025078","namespace":"kube-system","uid":"72981a73-7d31-416c-a55e-9e619fd02ad5","resourceVersion":"449","creationTimestamp":"2023-05-31T19:08:29Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"b11a18f131019d2811faf18cbc677083","kubernetes.io/config.mirror":"b11a18f131019d2811faf18cbc677083","kubernetes.io/config.seen":"2023-05-31T19:08:29.235426307Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-025078","uid":"527e40a2-2af7-4de3-b462-a471bac33c44","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-05-31T19:08:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4676 chars]
	I0531 19:09:06.840247   71907 request.go:628] Waited for 196.348505ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-025078
	I0531 19:09:06.840329   71907 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-025078
	I0531 19:09:06.840340   71907 round_trippers.go:469] Request Headers:
	I0531 19:09:06.840349   71907 round_trippers.go:473]     Accept: application/json, */*
	I0531 19:09:06.840357   71907 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0531 19:09:06.842879   71907 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0531 19:09:06.842903   71907 round_trippers.go:577] Response Headers:
	I0531 19:09:06.842913   71907 round_trippers.go:580]     Date: Wed, 31 May 2023 19:09:06 GMT
	I0531 19:09:06.842921   71907 round_trippers.go:580]     Audit-Id: 65d0e448-9502-4b3e-aafd-6f931ef2d21e
	I0531 19:09:06.842928   71907 round_trippers.go:580]     Cache-Control: no-cache, private
	I0531 19:09:06.842935   71907 round_trippers.go:580]     Content-Type: application/json
	I0531 19:09:06.842945   71907 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 61ef383b-79fe-44e8-8f9c-d13c8cd9f602
	I0531 19:09:06.842952   71907 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9616b646-f7d3-407a-9b78-ebb52861f5ac
	I0531 19:09:06.843049   71907 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-025078","uid":"527e40a2-2af7-4de3-b462-a471bac33c44","resourceVersion":"421","creationTimestamp":"2023-05-31T19:08:26Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-025078","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7022875d4a054c2d518e5e5a7b9d500799d50140","minikube.k8s.io/name":"multinode-025078","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_05_31T19_08_30_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-05-31T19:08:26Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I0531 19:09:06.843438   71907 pod_ready.go:92] pod "kube-scheduler-multinode-025078" in "kube-system" namespace has status "Ready":"True"
	I0531 19:09:06.843453   71907 pod_ready.go:81] duration metric: took 399.705375ms waiting for pod "kube-scheduler-multinode-025078" in "kube-system" namespace to be "Ready" ...
	I0531 19:09:06.843466   71907 pod_ready.go:38] duration metric: took 1.2006384s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0531 19:09:06.843485   71907 system_svc.go:44] waiting for kubelet service to be running ....
	I0531 19:09:06.843539   71907 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0531 19:09:06.857298   71907 system_svc.go:56] duration metric: took 13.804532ms WaitForService to wait for kubelet.
	I0531 19:09:06.857379   71907 kubeadm.go:581] duration metric: took 2.766748109s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0531 19:09:06.857413   71907 node_conditions.go:102] verifying NodePressure condition ...
	I0531 19:09:07.039790   71907 request.go:628] Waited for 182.274395ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes
	I0531 19:09:07.039865   71907 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes
	I0531 19:09:07.039881   71907 round_trippers.go:469] Request Headers:
	I0531 19:09:07.039892   71907 round_trippers.go:473]     Accept: application/json, */*
	I0531 19:09:07.039900   71907 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0531 19:09:07.042902   71907 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0531 19:09:07.042971   71907 round_trippers.go:577] Response Headers:
	I0531 19:09:07.043009   71907 round_trippers.go:580]     Audit-Id: 9a9e51ec-c59f-4896-8e17-49293f579f0f
	I0531 19:09:07.043040   71907 round_trippers.go:580]     Cache-Control: no-cache, private
	I0531 19:09:07.043084   71907 round_trippers.go:580]     Content-Type: application/json
	I0531 19:09:07.043109   71907 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 61ef383b-79fe-44e8-8f9c-d13c8cd9f602
	I0531 19:09:07.043130   71907 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9616b646-f7d3-407a-9b78-ebb52861f5ac
	I0531 19:09:07.043169   71907 round_trippers.go:580]     Date: Wed, 31 May 2023 19:09:07 GMT
	I0531 19:09:07.043368   71907 request.go:1188] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"504"},"items":[{"metadata":{"name":"multinode-025078","uid":"527e40a2-2af7-4de3-b462-a471bac33c44","resourceVersion":"421","creationTimestamp":"2023-05-31T19:08:26Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-025078","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7022875d4a054c2d518e5e5a7b9d500799d50140","minikube.k8s.io/name":"multinode-025078","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_05_31T19_08_30_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields
":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":" [truncated 12332 chars]
	I0531 19:09:07.044036   71907 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0531 19:09:07.044057   71907 node_conditions.go:123] node cpu capacity is 2
	I0531 19:09:07.044068   71907 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0531 19:09:07.044074   71907 node_conditions.go:123] node cpu capacity is 2
	I0531 19:09:07.044079   71907 node_conditions.go:105] duration metric: took 186.633865ms to run NodePressure ...
	I0531 19:09:07.044099   71907 start.go:228] waiting for startup goroutines ...
	I0531 19:09:07.044128   71907 start.go:242] writing updated cluster config ...
	I0531 19:09:07.044452   71907 ssh_runner.go:195] Run: rm -f paused
	I0531 19:09:07.107008   71907 start.go:573] kubectl: 1.27.2, cluster: 1.27.2 (minor skew: 0)
	I0531 19:09:07.110595   71907 out.go:177] * Done! kubectl is now configured to use "multinode-025078" cluster and "default" namespace by default
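	
	The repeated "Waited for ... due to client-side throttling, not priority and fairness" entries above come from client-go's client-side rate limiter (the QPS/Burst fields on rest.Config), not from the API server. Below is a minimal Go sketch of a rate-limited pod "Ready" poll of the kind this log shows; the QPS/Burst values, pod name, and timing are illustrative assumptions, not lifted from minikube's sources.
	
	package main
	
	import (
		"context"
		"fmt"
		"time"
	
		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)
	
	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		// client-go throttles on the client side once more than Burst requests
		// arrive faster than QPS allows; the ~160-200ms waits in the log are
		// this limiter pacing the GETs, not API Priority and Fairness.
		cfg.QPS = 5   // illustrative; matches client-go's historical default
		cfg.Burst = 10 // illustrative
	
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
	
		deadline := time.Now().Add(6 * time.Minute) // mirrors "waiting up to 6m0s"
		for time.Now().Before(deadline) {
			pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "kube-proxy-hwxjb", metav1.GetOptions{})
			if err == nil {
				for _, c := range pod.Status.Conditions {
					if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
						fmt.Println("pod is Ready")
						return
					}
				}
			}
			time.Sleep(500 * time.Millisecond)
		}
		fmt.Println("timed out waiting for pod to be Ready")
	}
	
	This pacing is why each pod_ready check above takes roughly 160-400ms end to end even though every individual GET returns in 2-3 milliseconds.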
	
	* 
	* ==> CRI-O <==
	* May 31 19:08:46 multinode-025078 crio[899]: time="2023-05-31 19:08:46.335859038Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/886e0da152e28664f5101bad21795d523aacc21f6ed04998e449f9724f36a4bc/merged/etc/passwd: no such file or directory"
	May 31 19:08:46 multinode-025078 crio[899]: time="2023-05-31 19:08:46.335907366Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/886e0da152e28664f5101bad21795d523aacc21f6ed04998e449f9724f36a4bc/merged/etc/group: no such file or directory"
	May 31 19:08:46 multinode-025078 crio[899]: time="2023-05-31 19:08:46.391147246Z" level=info msg="Created container 8e6685a8f62992bfc86257a931c581597425b2e502c48f19d0d8112e30067f63: kube-system/coredns-5d78c9869d-hhw4h/coredns" id=bca03cf9-21fb-4165-9720-daf7c23e3d04 name=/runtime.v1.RuntimeService/CreateContainer
	May 31 19:08:46 multinode-025078 crio[899]: time="2023-05-31 19:08:46.392664106Z" level=info msg="Starting container: 8e6685a8f62992bfc86257a931c581597425b2e502c48f19d0d8112e30067f63" id=00d6f884-22d6-4ac2-ba9a-d339cbcc283c name=/runtime.v1.RuntimeService/StartContainer
	May 31 19:08:46 multinode-025078 crio[899]: time="2023-05-31 19:08:46.409069576Z" level=info msg="Started container" PID=1954 containerID=8e6685a8f62992bfc86257a931c581597425b2e502c48f19d0d8112e30067f63 description=kube-system/coredns-5d78c9869d-hhw4h/coredns id=00d6f884-22d6-4ac2-ba9a-d339cbcc283c name=/runtime.v1.RuntimeService/StartContainer sandboxID=5e693e7a01cd2130404825e5b85a37bcf90b34ff127fec58e37ed23aaf2dc6d6
	May 31 19:09:09 multinode-025078 crio[899]: time="2023-05-31 19:09:09.231157762Z" level=info msg="Running pod sandbox: default/busybox-67b7f59bb-fn4vn/POD" id=e4ff820f-f6cd-4ba2-80ba-80ed8857f20a name=/runtime.v1.RuntimeService/RunPodSandbox
	May 31 19:09:09 multinode-025078 crio[899]: time="2023-05-31 19:09:09.231226217Z" level=warning msg="Allowed annotations are specified for workload []"
	May 31 19:09:09 multinode-025078 crio[899]: time="2023-05-31 19:09:09.247251735Z" level=info msg="Got pod network &{Name:busybox-67b7f59bb-fn4vn Namespace:default ID:db5fb96ba1c6dbc5a78c55ecf40355622b12ca5808b05a6b88177761569e1077 UID:7b68836a-b29a-467a-8a9c-a17fb53833b2 NetNS:/var/run/netns/1c2f40b6-7b0a-492b-a15a-2fa4ef714ba1 Networks:[] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	May 31 19:09:09 multinode-025078 crio[899]: time="2023-05-31 19:09:09.247431565Z" level=info msg="Adding pod default_busybox-67b7f59bb-fn4vn to CNI network \"kindnet\" (type=ptp)"
	May 31 19:09:09 multinode-025078 crio[899]: time="2023-05-31 19:09:09.257774835Z" level=info msg="Got pod network &{Name:busybox-67b7f59bb-fn4vn Namespace:default ID:db5fb96ba1c6dbc5a78c55ecf40355622b12ca5808b05a6b88177761569e1077 UID:7b68836a-b29a-467a-8a9c-a17fb53833b2 NetNS:/var/run/netns/1c2f40b6-7b0a-492b-a15a-2fa4ef714ba1 Networks:[] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	May 31 19:09:09 multinode-025078 crio[899]: time="2023-05-31 19:09:09.257936401Z" level=info msg="Checking pod default_busybox-67b7f59bb-fn4vn for CNI network kindnet (type=ptp)"
	May 31 19:09:09 multinode-025078 crio[899]: time="2023-05-31 19:09:09.277248102Z" level=info msg="Ran pod sandbox db5fb96ba1c6dbc5a78c55ecf40355622b12ca5808b05a6b88177761569e1077 with infra container: default/busybox-67b7f59bb-fn4vn/POD" id=e4ff820f-f6cd-4ba2-80ba-80ed8857f20a name=/runtime.v1.RuntimeService/RunPodSandbox
	May 31 19:09:09 multinode-025078 crio[899]: time="2023-05-31 19:09:09.278702670Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28" id=4b2fc4b8-dbdc-40b3-9661-b9e0373f0930 name=/runtime.v1.ImageService/ImageStatus
	May 31 19:09:09 multinode-025078 crio[899]: time="2023-05-31 19:09:09.279040932Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28 not found" id=4b2fc4b8-dbdc-40b3-9661-b9e0373f0930 name=/runtime.v1.ImageService/ImageStatus
	May 31 19:09:09 multinode-025078 crio[899]: time="2023-05-31 19:09:09.280392214Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28" id=83fe9112-4df3-4f3a-b8d0-b3723b739a40 name=/runtime.v1.ImageService/PullImage
	May 31 19:09:09 multinode-025078 crio[899]: time="2023-05-31 19:09:09.282670619Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28\""
	May 31 19:09:09 multinode-025078 crio[899]: time="2023-05-31 19:09:09.949330222Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28\""
	May 31 19:09:11 multinode-025078 crio[899]: time="2023-05-31 19:09:11.218235136Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:859d41e4316c182cb559f9ae3c5ffcac8602ee1179794a1707c06cd092a008d3" id=83fe9112-4df3-4f3a-b8d0-b3723b739a40 name=/runtime.v1.ImageService/PullImage
	May 31 19:09:11 multinode-025078 crio[899]: time="2023-05-31 19:09:11.219859048Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28" id=5b0fbc3e-4979-4a78-8fae-eccd8c0faf91 name=/runtime.v1.ImageService/ImageStatus
	May 31 19:09:11 multinode-025078 crio[899]: time="2023-05-31 19:09:11.220604076Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:89a35e2ebb6b938201966889b5e8c85b931db6432c5643966116cd1c28bf45cd,RepoTags:[gcr.io/k8s-minikube/busybox:1.28],RepoDigests:[gcr.io/k8s-minikube/busybox@sha256:859d41e4316c182cb559f9ae3c5ffcac8602ee1179794a1707c06cd092a008d3 gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12],Size_:1496796,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=5b0fbc3e-4979-4a78-8fae-eccd8c0faf91 name=/runtime.v1.ImageService/ImageStatus
	May 31 19:09:11 multinode-025078 crio[899]: time="2023-05-31 19:09:11.221560154Z" level=info msg="Creating container: default/busybox-67b7f59bb-fn4vn/busybox" id=d0b2da79-d383-40e2-ae84-d0f553c6b836 name=/runtime.v1.RuntimeService/CreateContainer
	May 31 19:09:11 multinode-025078 crio[899]: time="2023-05-31 19:09:11.221673047Z" level=warning msg="Allowed annotations are specified for workload []"
	May 31 19:09:11 multinode-025078 crio[899]: time="2023-05-31 19:09:11.313740284Z" level=info msg="Created container 402366254bb900683404521089eda354ae6f98bcbf31ad0b26aa7b0af3154b48: default/busybox-67b7f59bb-fn4vn/busybox" id=d0b2da79-d383-40e2-ae84-d0f553c6b836 name=/runtime.v1.RuntimeService/CreateContainer
	May 31 19:09:11 multinode-025078 crio[899]: time="2023-05-31 19:09:11.314512060Z" level=info msg="Starting container: 402366254bb900683404521089eda354ae6f98bcbf31ad0b26aa7b0af3154b48" id=2ec83cb2-7ef4-44d2-9503-8d9443f29cb1 name=/runtime.v1.RuntimeService/StartContainer
	May 31 19:09:11 multinode-025078 crio[899]: time="2023-05-31 19:09:11.328218402Z" level=info msg="Started container" PID=2067 containerID=402366254bb900683404521089eda354ae6f98bcbf31ad0b26aa7b0af3154b48 description=default/busybox-67b7f59bb-fn4vn/busybox id=2ec83cb2-7ef4-44d2-9503-8d9443f29cb1 name=/runtime.v1.RuntimeService/StartContainer sandboxID=db5fb96ba1c6dbc5a78c55ecf40355622b12ca5808b05a6b88177761569e1077
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	402366254bb90       gcr.io/k8s-minikube/busybox@sha256:859d41e4316c182cb559f9ae3c5ffcac8602ee1179794a1707c06cd092a008d3   5 seconds ago       Running             busybox                   0                   db5fb96ba1c6d       busybox-67b7f59bb-fn4vn
	8e6685a8f6299       97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108                                      30 seconds ago      Running             coredns                   0                   5e693e7a01cd2       coredns-5d78c9869d-hhw4h
	23ac9997ceefd       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                      31 seconds ago      Running             storage-provisioner       0                   3b8ff3d34c274       storage-provisioner
	16c64bfdf2e0f       29921a084542255eb81a1a660a603b1a24636d88b202f9010daae75fa32754c0                                      33 seconds ago      Running             kube-proxy                0                   bc3329b40fa4e       kube-proxy-ws8xb
	140a540983b71       b18bf71b941bae2e12db1c07e567ad14e4febbc778310a0fc64487f1ac877d79                                      33 seconds ago      Running             kindnet-cni               0                   6eac2c24dac91       kindnet-556pq
	c0c8c61893616       2ee705380c3c59a538b853cbe9ae9886ebbd0001a4cea4add5adeea48e5f48d4                                      55 seconds ago      Running             kube-controller-manager   0                   bdd3441a72f88       kube-controller-manager-multinode-025078
	a4f400bc6dc35       24bc64e911039ecf00e263be2161797c758b7d82403ca5516ab64047a477f737                                      55 seconds ago      Running             etcd                      0                   5af0c53a754c0       etcd-multinode-025078
	94e51ce70c655       305d7ed1dae2877c3a80d434c5fb9f1aac1aa3d2431c36130a3fcd1970e93840                                      55 seconds ago      Running             kube-scheduler            0                   ef69a883de994       kube-scheduler-multinode-025078
	57c382e3990cc       72c9df6be7f1b997e4a31b5cb9aa7262e5278905af97e6a69e341e3f0f9bbaae                                      55 seconds ago      Running             kube-apiserver            0                   bac25473ebe1a       kube-apiserver-multinode-025078
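	
	The table above is CRI-level container state from the primary node. Assuming SSH access to the node, a similar listing could be reproduced with crictl, for example:
	
	    $ minikube -p multinode-025078 ssh -- sudo crictl ps -a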
	
	* 
	* ==> coredns [8e6685a8f62992bfc86257a931c581597425b2e502c48f19d0d8112e30067f63] <==
	* [INFO] 10.244.1.2:54136 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000169656s
	[INFO] 10.244.0.3:32952 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000103646s
	[INFO] 10.244.0.3:36177 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001164757s
	[INFO] 10.244.0.3:34383 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000109456s
	[INFO] 10.244.0.3:44091 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000077907s
	[INFO] 10.244.0.3:56271 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.000983081s
	[INFO] 10.244.0.3:42145 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000076332s
	[INFO] 10.244.0.3:38551 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000063417s
	[INFO] 10.244.0.3:42825 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000079384s
	[INFO] 10.244.1.2:47762 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000131412s
	[INFO] 10.244.1.2:48729 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000080476s
	[INFO] 10.244.1.2:40640 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.00009851s
	[INFO] 10.244.1.2:36159 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000084176s
	[INFO] 10.244.0.3:42129 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000100528s
	[INFO] 10.244.0.3:59319 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000051462s
	[INFO] 10.244.0.3:45976 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000069227s
	[INFO] 10.244.0.3:40226 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000044989s
	[INFO] 10.244.1.2:40917 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000103565s
	[INFO] 10.244.1.2:55820 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000153509s
	[INFO] 10.244.1.2:47837 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000135416s
	[INFO] 10.244.1.2:57475 - 5 "PTR IN 1.58.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000100487s
	[INFO] 10.244.0.3:37816 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000104171s
	[INFO] 10.244.0.3:52769 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000077858s
	[INFO] 10.244.0.3:39605 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.00008109s
	[INFO] 10.244.0.3:56513 - 5 "PTR IN 1.58.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000083289s
	
	* 
	* ==> describe nodes <==
	* Name:               multinode-025078
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=multinode-025078
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=7022875d4a054c2d518e5e5a7b9d500799d50140
	                    minikube.k8s.io/name=multinode-025078
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_05_31T19_08_30_0700
	                    minikube.k8s.io/version=v1.30.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 31 May 2023 19:08:26 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-025078
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 31 May 2023 19:09:10 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 31 May 2023 19:08:44 +0000   Wed, 31 May 2023 19:08:22 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 31 May 2023 19:08:44 +0000   Wed, 31 May 2023 19:08:22 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 31 May 2023 19:08:44 +0000   Wed, 31 May 2023 19:08:22 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 31 May 2023 19:08:44 +0000   Wed, 31 May 2023 19:08:44 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.58.2
	  Hostname:    multinode-025078
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022624Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022624Ki
	  pods:               110
	System Info:
	  Machine ID:                 8c0d0bdd6b6a4f1bae0a4542fc945f58
	  System UUID:                11eb33c2-de17-40d9-acca-aa66193b8096
	  Boot ID:                    35429d0f-2ece-432d-a992-d9f8cda99d9c
	  Kernel Version:             5.15.0-1037-aws
	  OS Image:                   Ubuntu 22.04.2 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.24.5
	  Kubelet Version:            v1.27.2
	  Kube-Proxy Version:         v1.27.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-67b7f59bb-fn4vn                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         10s
	  kube-system                 coredns-5d78c9869d-hhw4h                    100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     34s
	  kube-system                 etcd-multinode-025078                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         48s
	  kube-system                 kindnet-556pq                               100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      35s
	  kube-system                 kube-apiserver-multinode-025078             250m (12%)    0 (0%)      0 (0%)           0 (0%)         50s
	  kube-system                 kube-controller-manager-multinode-025078    200m (10%)    0 (0%)      0 (0%)           0 (0%)         50s
	  kube-system                 kube-proxy-ws8xb                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         35s
	  kube-system                 kube-scheduler-multinode-025078             100m (5%)     0 (0%)      0 (0%)           0 (0%)         48s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         33s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 33s                kube-proxy       
	  Normal  NodeHasSufficientMemory  56s (x8 over 56s)  kubelet          Node multinode-025078 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    56s (x8 over 56s)  kubelet          Node multinode-025078 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     56s (x8 over 56s)  kubelet          Node multinode-025078 status is now: NodeHasSufficientPID
	  Normal  Starting                 48s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  48s                kubelet          Node multinode-025078 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    48s                kubelet          Node multinode-025078 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     48s                kubelet          Node multinode-025078 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           35s                node-controller  Node multinode-025078 event: Registered Node multinode-025078 in Controller
	  Normal  NodeReady                33s                kubelet          Node multinode-025078 status is now: NodeReady
	
	
	Name:               multinode-025078-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=multinode-025078-m02
	                    kubernetes.io/os=linux
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 31 May 2023 19:09:03 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-025078-m02
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 31 May 2023 19:09:13 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 31 May 2023 19:09:05 +0000   Wed, 31 May 2023 19:09:03 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 31 May 2023 19:09:05 +0000   Wed, 31 May 2023 19:09:03 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 31 May 2023 19:09:05 +0000   Wed, 31 May 2023 19:09:03 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 31 May 2023 19:09:05 +0000   Wed, 31 May 2023 19:09:05 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.58.3
	  Hostname:    multinode-025078-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022624Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022624Ki
	  pods:               110
	System Info:
	  Machine ID:                 e2483051c1854991b50d0790eadef964
	  System UUID:                617b5cac-5210-4c09-b401-07e342ec6185
	  Boot ID:                    35429d0f-2ece-432d-a992-d9f8cda99d9c
	  Kernel Version:             5.15.0-1037-aws
	  OS Image:                   Ubuntu 22.04.2 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.24.5
	  Kubelet Version:            v1.27.2
	  Kube-Proxy Version:         v1.27.2
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-67b7f59bb-9zwlk    0 (0%)        0 (0%)      0 (0%)           0 (0%)         10s
	  kube-system                 kindnet-q7g8j              100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      14s
	  kube-system                 kube-proxy-hwxjb           0 (0%)        0 (0%)      0 (0%)           0 (0%)         14s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-1Gi      0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	  hugepages-32Mi     0 (0%)     0 (0%)
	  hugepages-64Ki     0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 12s                kube-proxy       
	  Normal  NodeHasSufficientMemory  14s (x5 over 16s)  kubelet          Node multinode-025078-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    14s (x5 over 16s)  kubelet          Node multinode-025078-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     14s (x5 over 16s)  kubelet          Node multinode-025078-m02 status is now: NodeHasSufficientPID
	  Normal  NodeReady                12s                kubelet          Node multinode-025078-m02 status is now: NodeReady
	  Normal  RegisteredNode           10s                node-controller  Node multinode-025078-m02 event: Registered Node multinode-025078-m02 in Controller
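	
	The two node blocks above are standard "kubectl describe nodes" output captured at post-mortem time; with the context this test configured, they could be regenerated with, for example:
	
	    $ kubectl --context multinode-025078 describe nodes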
	
	* 
	* ==> dmesg <==
	* [  +0.000741] FS-Cache: N-cookie c=0000000c [p=00000003 fl=2 nc=0 na=1]
	[  +0.000954] FS-Cache: N-cookie d=0000000077ffde6d{9p.inode} n=0000000074ec7799
	[  +0.001241] FS-Cache: N-key=[8] '915b3b0000000000'
	[  +0.003042] FS-Cache: Duplicate cookie detected
	[  +0.000777] FS-Cache: O-cookie c=00000006 [p=00000003 fl=226 nc=0 na=1]
	[  +0.000979] FS-Cache: O-cookie d=0000000077ffde6d{9p.inode} n=0000000031e1563a
	[  +0.001057] FS-Cache: O-key=[8] '915b3b0000000000'
	[  +0.000743] FS-Cache: N-cookie c=0000000d [p=00000003 fl=2 nc=0 na=1]
	[  +0.000985] FS-Cache: N-cookie d=0000000077ffde6d{9p.inode} n=000000007278ef73
	[  +0.001110] FS-Cache: N-key=[8] '915b3b0000000000'
	[  +2.905928] FS-Cache: Duplicate cookie detected
	[  +0.000862] FS-Cache: O-cookie c=00000004 [p=00000003 fl=226 nc=0 na=1]
	[  +0.001154] FS-Cache: O-cookie d=0000000077ffde6d{9p.inode} n=00000000ad00c953
	[  +0.001219] FS-Cache: O-key=[8] '905b3b0000000000'
	[  +0.000792] FS-Cache: N-cookie c=0000000f [p=00000003 fl=2 nc=0 na=1]
	[  +0.001108] FS-Cache: N-cookie d=0000000077ffde6d{9p.inode} n=00000000be9b4fe0
	[  +0.001229] FS-Cache: N-key=[8] '905b3b0000000000'
	[  +0.280333] FS-Cache: Duplicate cookie detected
	[  +0.000717] FS-Cache: O-cookie c=00000009 [p=00000003 fl=226 nc=0 na=1]
	[  +0.000979] FS-Cache: O-cookie d=0000000077ffde6d{9p.inode} n=000000003fd4f91a
	[  +0.001109] FS-Cache: O-key=[8] '985b3b0000000000'
	[  +0.000717] FS-Cache: N-cookie c=00000010 [p=00000003 fl=2 nc=0 na=1]
	[  +0.000957] FS-Cache: N-cookie d=0000000077ffde6d{9p.inode} n=0000000074ec7799
	[  +0.001067] FS-Cache: N-key=[8] '985b3b0000000000'
	[  +9.760834] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	
	* 
	* ==> etcd [a4f400bc6dc353b4880a0ec6a98d5849c881142ca1e5253916473f75706a6642] <==
	* {"level":"info","ts":"2023-05-31T19:08:21.892Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 switched to configuration voters=(12882097698489969905)"}
	{"level":"info","ts":"2023-05-31T19:08:21.892Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"3a56e4ca95e2355c","local-member-id":"b2c6679ac05f2cf1","added-peer-id":"b2c6679ac05f2cf1","added-peer-peer-urls":["https://192.168.58.2:2380"]}
	{"level":"info","ts":"2023-05-31T19:08:21.898Z","caller":"embed/etcd.go:687","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2023-05-31T19:08:21.898Z","caller":"embed/etcd.go:275","msg":"now serving peer/client/metrics","local-member-id":"b2c6679ac05f2cf1","initial-advertise-peer-urls":["https://192.168.58.2:2380"],"listen-peer-urls":["https://192.168.58.2:2380"],"advertise-client-urls":["https://192.168.58.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.58.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2023-05-31T19:08:21.906Z","caller":"embed/etcd.go:762","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-05-31T19:08:21.907Z","caller":"embed/etcd.go:586","msg":"serving peer traffic","address":"192.168.58.2:2380"}
	{"level":"info","ts":"2023-05-31T19:08:21.910Z","caller":"embed/etcd.go:558","msg":"cmux::serve","address":"192.168.58.2:2380"}
	{"level":"info","ts":"2023-05-31T19:08:22.374Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 is starting a new election at term 1"}
	{"level":"info","ts":"2023-05-31T19:08:22.374Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became pre-candidate at term 1"}
	{"level":"info","ts":"2023-05-31T19:08:22.374Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 received MsgPreVoteResp from b2c6679ac05f2cf1 at term 1"}
	{"level":"info","ts":"2023-05-31T19:08:22.374Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became candidate at term 2"}
	{"level":"info","ts":"2023-05-31T19:08:22.374Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 received MsgVoteResp from b2c6679ac05f2cf1 at term 2"}
	{"level":"info","ts":"2023-05-31T19:08:22.374Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became leader at term 2"}
	{"level":"info","ts":"2023-05-31T19:08:22.374Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: b2c6679ac05f2cf1 elected leader b2c6679ac05f2cf1 at term 2"}
	{"level":"info","ts":"2023-05-31T19:08:22.378Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"b2c6679ac05f2cf1","local-member-attributes":"{Name:multinode-025078 ClientURLs:[https://192.168.58.2:2379]}","request-path":"/0/members/b2c6679ac05f2cf1/attributes","cluster-id":"3a56e4ca95e2355c","publish-timeout":"7s"}
	{"level":"info","ts":"2023-05-31T19:08:22.379Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-05-31T19:08:22.380Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-05-31T19:08:22.380Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-05-31T19:08:22.381Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"192.168.58.2:2379"}
	{"level":"info","ts":"2023-05-31T19:08:22.390Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2023-05-31T19:08:22.422Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-05-31T19:08:22.422Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-05-31T19:08:22.423Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"3a56e4ca95e2355c","local-member-id":"b2c6679ac05f2cf1","cluster-version":"3.5"}
	{"level":"info","ts":"2023-05-31T19:08:22.423Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-05-31T19:08:22.423Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	
	* 
	* ==> kernel <==
	*  19:09:17 up 51 min,  0 users,  load average: 1.30, 1.49, 1.18
	Linux multinode-025078 5.15.0-1037-aws #41~20.04.1-Ubuntu SMP Mon May 22 18:20:20 UTC 2023 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.2 LTS"
	
	* 
	* ==> kindnet [140a540983b718dc1023ee44c9d28f543baee70abaf1da436169bfa451306655] <==
	* I0531 19:08:43.928830       1 main.go:102] connected to apiserver: https://10.96.0.1:443
	I0531 19:08:43.928900       1 main.go:107] hostIP = 192.168.58.2
	podIP = 192.168.58.2
	I0531 19:08:43.929007       1 main.go:116] setting mtu 1500 for CNI 
	I0531 19:08:43.929017       1 main.go:146] kindnetd IP family: "ipv4"
	I0531 19:08:43.929028       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I0531 19:08:44.328104       1 main.go:223] Handling node with IPs: map[192.168.58.2:{}]
	I0531 19:08:44.328144       1 main.go:227] handling current node
	I0531 19:08:54.343309       1 main.go:223] Handling node with IPs: map[192.168.58.2:{}]
	I0531 19:08:54.343337       1 main.go:227] handling current node
	I0531 19:09:04.356934       1 main.go:223] Handling node with IPs: map[192.168.58.2:{}]
	I0531 19:09:04.357196       1 main.go:227] handling current node
	I0531 19:09:04.357246       1 main.go:223] Handling node with IPs: map[192.168.58.3:{}]
	I0531 19:09:04.357311       1 main.go:250] Node multinode-025078-m02 has CIDR [10.244.1.0/24] 
	I0531 19:09:04.357473       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.1.0/24 Src: <nil> Gw: 192.168.58.3 Flags: [] Table: 0} 
	I0531 19:09:14.371312       1 main.go:223] Handling node with IPs: map[192.168.58.2:{}]
	I0531 19:09:14.371414       1 main.go:227] handling current node
	I0531 19:09:14.371449       1 main.go:223] Handling node with IPs: map[192.168.58.3:{}]
	I0531 19:09:14.371505       1 main.go:250] Node multinode-025078-m02 has CIDR [10.244.1.0/24] 
	
	* 
	* ==> kube-apiserver [57c382e3990cce47ca0b19a8b8fbc37e4b7396fa1cec72790fd3508bc03f1936] <==
	* I0531 19:08:26.128383       1 controller.go:624] quota admission added evaluator for: namespaces
	I0531 19:08:26.136159       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I0531 19:08:26.154529       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I0531 19:08:26.165657       1 shared_informer.go:318] Caches are synced for node_authorizer
	I0531 19:08:26.495031       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0531 19:08:26.924625       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0531 19:08:26.929892       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0531 19:08:26.929918       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0531 19:08:27.485935       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0531 19:08:27.529240       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0531 19:08:27.606508       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs=map[IPv4:10.96.0.1]
	W0531 19:08:27.612434       1 lease.go:251] Resetting endpoints for master service "kubernetes" to [192.168.58.2]
	I0531 19:08:27.613509       1 controller.go:624] quota admission added evaluator for: endpoints
	I0531 19:08:27.618034       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0531 19:08:28.031319       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I0531 19:08:29.160806       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I0531 19:08:29.172730       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs=map[IPv4:10.96.0.10]
	I0531 19:08:29.185873       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I0531 19:08:42.822859       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I0531 19:08:42.904345       1 controller.go:624] quota admission added evaluator for: controllerrevisions.apps
	E0531 19:09:12.787943       1 watch.go:287] unable to encode watch object *v1.WatchEvent: http2: stream closed (&streaming.encoderWithAllocator{writer:responsewriter.outerWithCloseNotifyAndFlush{UserProvidedDecorator:(*metrics.ResponseWriterDelegator)(0x400cc12ea0), InnerCloseNotifierFlusher:struct { httpsnoop.Unwrapper; http.ResponseWriter; http.Flusher; http.CloseNotifier; http.Pusher }{Unwrapper:(*httpsnoop.rw)(0x4009a5f040), ResponseWriter:(*httpsnoop.rw)(0x4009a5f040), Flusher:(*httpsnoop.rw)(0x4009a5f040), CloseNotifier:(*httpsnoop.rw)(0x4009a5f040), Pusher:(*httpsnoop.rw)(0x4009a5f040)}}, encoder:(*versioning.codec)(0x400ae90000), memAllocator:(*runtime.Allocator)(0x400a446708)})
	E0531 19:09:13.874961       1 upgradeaware.go:440] Error proxying data from backend to client: write tcp 192.168.58.2:8443->192.168.58.1:49078: write: broken pipe
	E0531 19:09:14.117681       1 upgradeaware.go:440] Error proxying data from backend to client: write tcp 192.168.58.2:8443->192.168.58.1:49108: write: broken pipe
	E0531 19:09:14.594819       1 upgradeaware.go:440] Error proxying data from backend to client: write tcp 192.168.58.2:8443->192.168.58.1:49150: write: broken pipe
	E0531 19:09:15.478977       1 upgradeaware.go:426] Error proxying data from client to backend: write tcp 192.168.58.2:48946->192.168.58.2:10250: write: broken pipe
	
	* 
	* ==> kube-controller-manager [c0c8c6189361677c9d562801517d152374e14c9feb5e6dd8d9537cf66b532428] <==
	* I0531 19:08:42.145513       1 event.go:307] "Event occurred" object="kube-system/kube-controller-manager-multinode-025078" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0531 19:08:42.145618       1 event.go:307] "Event occurred" object="kube-system/kube-scheduler-multinode-025078" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0531 19:08:42.145707       1 event.go:307] "Event occurred" object="kube-system/kube-apiserver-multinode-025078" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0531 19:08:42.163231       1 shared_informer.go:318] Caches are synced for resource quota
	I0531 19:08:42.508772       1 shared_informer.go:318] Caches are synced for garbage collector
	I0531 19:08:42.508802       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I0531 19:08:42.527530       1 shared_informer.go:318] Caches are synced for garbage collector
	I0531 19:08:42.849225       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-5d78c9869d to 2"
	I0531 19:08:42.927202       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-556pq"
	I0531 19:08:42.927577       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-ws8xb"
	I0531 19:08:43.001826       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-5d78c9869d to 1 from 2"
	I0531 19:08:43.091620       1 event.go:307] "Event occurred" object="kube-system/coredns-5d78c9869d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5d78c9869d-fs28v"
	I0531 19:08:43.183816       1 event.go:307] "Event occurred" object="kube-system/coredns-5d78c9869d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5d78c9869d-hhw4h"
	I0531 19:08:43.528487       1 event.go:307] "Event occurred" object="kube-system/coredns-5d78c9869d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-5d78c9869d-fs28v"
	I0531 19:08:47.109706       1 node_lifecycle_controller.go:1046] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	I0531 19:09:03.170553       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-025078-m02\" does not exist"
	I0531 19:09:03.197835       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-hwxjb"
	I0531 19:09:03.197941       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-q7g8j"
	I0531 19:09:03.209917       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-025078-m02" podCIDRs=[10.244.1.0/24]
	W0531 19:09:05.153145       1 topologycache.go:232] Can't get CPU or zone information for multinode-025078-m02 node
	I0531 19:09:07.112336       1 event.go:307] "Event occurred" object="multinode-025078-m02" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-025078-m02 event: Registered Node multinode-025078-m02 in Controller"
	I0531 19:09:07.112386       1 node_lifecycle_controller.go:875] "Missing timestamp for Node. Assuming now as a timestamp" node="multinode-025078-m02"
	I0531 19:09:07.956441       1 event.go:307] "Event occurred" object="default/busybox" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set busybox-67b7f59bb to 2"
	I0531 19:09:07.984920       1 event.go:307] "Event occurred" object="default/busybox-67b7f59bb" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-67b7f59bb-9zwlk"
	I0531 19:09:07.997943       1 event.go:307] "Event occurred" object="default/busybox-67b7f59bb" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-67b7f59bb-fn4vn"
	
	* 
	* ==> kube-proxy [16c64bfdf2e0f3fe02c18c19bc7f2ca1ae502de6d95f8fc18de87d092da26bb7] <==
	* I0531 19:08:43.990552       1 node.go:141] Successfully retrieved node IP: 192.168.58.2
	I0531 19:08:43.990664       1 server_others.go:110] "Detected node IP" address="192.168.58.2"
	I0531 19:08:43.990708       1 server_others.go:551] "Using iptables proxy"
	I0531 19:08:44.020286       1 server_others.go:190] "Using iptables Proxier"
	I0531 19:08:44.020424       1 server_others.go:197] "kube-proxy running in dual-stack mode" ipFamily=IPv4
	I0531 19:08:44.020482       1 server_others.go:198] "Creating dualStackProxier for iptables"
	I0531 19:08:44.020533       1 server_others.go:481] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, defaulting to no-op detect-local for IPv6"
	I0531 19:08:44.020659       1 proxier.go:253] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0531 19:08:44.021569       1 server.go:657] "Version info" version="v1.27.2"
	I0531 19:08:44.021972       1 server.go:659] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0531 19:08:44.023061       1 config.go:188] "Starting service config controller"
	I0531 19:08:44.023195       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0531 19:08:44.023286       1 config.go:97] "Starting endpoint slice config controller"
	I0531 19:08:44.023332       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0531 19:08:44.024088       1 config.go:315] "Starting node config controller"
	I0531 19:08:44.024175       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0531 19:08:44.126845       1 shared_informer.go:318] Caches are synced for node config
	I0531 19:08:44.126882       1 shared_informer.go:318] Caches are synced for service config
	I0531 19:08:44.126907       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	* 
	* ==> kube-scheduler [94e51ce70c655e9ce8c93cffa8439902ad5769eae9b6246fc5447328b496900f] <==
	* W0531 19:08:26.401867       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0531 19:08:26.401904       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0531 19:08:26.401962       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0531 19:08:26.401979       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0531 19:08:26.402041       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0531 19:08:26.402054       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0531 19:08:26.402088       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0531 19:08:26.402102       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0531 19:08:26.402136       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0531 19:08:26.402148       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0531 19:08:26.402191       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0531 19:08:26.402206       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0531 19:08:26.402280       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0531 19:08:26.402328       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0531 19:08:26.402346       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0531 19:08:26.402330       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0531 19:08:26.402278       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0531 19:08:26.402381       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0531 19:08:26.402422       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0531 19:08:26.402435       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0531 19:08:26.402500       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0531 19:08:26.402539       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0531 19:08:27.298137       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0531 19:08:27.298174       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0531 19:08:27.985590       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* May 31 19:08:43 multinode-025078 kubelet[1383]: I0531 19:08:43.066367    1383 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d0bcd8bd-2828-4ad8-affa-aa8fb8b01b14-lib-modules\") pod \"kube-proxy-ws8xb\" (UID: \"d0bcd8bd-2828-4ad8-affa-aa8fb8b01b14\") " pod="kube-system/kube-proxy-ws8xb"
	May 31 19:08:43 multinode-025078 kubelet[1383]: W0531 19:08:43.628773    1383 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/9d3cbea3acdb0b8d1b32dece59d31af9111f937e060f05b7083a59033f8e8705/crio/crio-6eac2c24dac91a08d48a545f4fc0d5f5d76cd1ed406ccbb767a06245bb1de4ea WatchSource:0}: Error finding container 6eac2c24dac91a08d48a545f4fc0d5f5d76cd1ed406ccbb767a06245bb1de4ea: Status 404 returned error can't find the container with id 6eac2c24dac91a08d48a545f4fc0d5f5d76cd1ed406ccbb767a06245bb1de4ea
	May 31 19:08:43 multinode-025078 kubelet[1383]: W0531 19:08:43.678668    1383 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/9d3cbea3acdb0b8d1b32dece59d31af9111f937e060f05b7083a59033f8e8705/crio/crio-bc3329b40fa4ef14d10020e182575a61ac45cc46ffe3edec9f11a4cd7c095e2a WatchSource:0}: Error finding container bc3329b40fa4ef14d10020e182575a61ac45cc46ffe3edec9f11a4cd7c095e2a: Status 404 returned error can't find the container with id bc3329b40fa4ef14d10020e182575a61ac45cc46ffe3edec9f11a4cd7c095e2a
	May 31 19:08:44 multinode-025078 kubelet[1383]: I0531 19:08:44.399217    1383 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kindnet-556pq" podStartSLOduration=2.399165464 podCreationTimestamp="2023-05-31 19:08:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-05-31 19:08:44.399040477 +0000 UTC m=+15.263099317" watchObservedRunningTime="2023-05-31 19:08:44.399165464 +0000 UTC m=+15.263224296"
	May 31 19:08:44 multinode-025078 kubelet[1383]: I0531 19:08:44.399340    1383 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-ws8xb" podStartSLOduration=2.399321762 podCreationTimestamp="2023-05-31 19:08:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-05-31 19:08:44.385670304 +0000 UTC m=+15.249729144" watchObservedRunningTime="2023-05-31 19:08:44.399321762 +0000 UTC m=+15.263380611"
	May 31 19:08:44 multinode-025078 kubelet[1383]: I0531 19:08:44.445597    1383 kubelet_node_status.go:493] "Fast updating node status as it just became ready"
	May 31 19:08:44 multinode-025078 kubelet[1383]: I0531 19:08:44.473313    1383 topology_manager.go:212] "Topology Admit Handler"
	May 31 19:08:44 multinode-025078 kubelet[1383]: W0531 19:08:44.478494    1383 reflector.go:533] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:multinode-025078" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'multinode-025078' and this object
	May 31 19:08:44 multinode-025078 kubelet[1383]: E0531 19:08:44.478539    1383 reflector.go:148] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:multinode-025078" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'multinode-025078' and this object
	May 31 19:08:44 multinode-025078 kubelet[1383]: I0531 19:08:44.581356    1383 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3f927f21-c6c7-43a2-a635-4fd3c672172d-config-volume\") pod \"coredns-5d78c9869d-hhw4h\" (UID: \"3f927f21-c6c7-43a2-a635-4fd3c672172d\") " pod="kube-system/coredns-5d78c9869d-hhw4h"
	May 31 19:08:44 multinode-025078 kubelet[1383]: I0531 19:08:44.581408    1383 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-959nm\" (UniqueName: \"kubernetes.io/projected/3f927f21-c6c7-43a2-a635-4fd3c672172d-kube-api-access-959nm\") pod \"coredns-5d78c9869d-hhw4h\" (UID: \"3f927f21-c6c7-43a2-a635-4fd3c672172d\") " pod="kube-system/coredns-5d78c9869d-hhw4h"
	May 31 19:08:45 multinode-025078 kubelet[1383]: I0531 19:08:45.205326    1383 topology_manager.go:212] "Topology Admit Handler"
	May 31 19:08:45 multinode-025078 kubelet[1383]: I0531 19:08:45.285630    1383 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n6nkx\" (UniqueName: \"kubernetes.io/projected/e44bd6e4-ee1c-488f-9b89-90ae9b5880f8-kube-api-access-n6nkx\") pod \"storage-provisioner\" (UID: \"e44bd6e4-ee1c-488f-9b89-90ae9b5880f8\") " pod="kube-system/storage-provisioner"
	May 31 19:08:45 multinode-025078 kubelet[1383]: I0531 19:08:45.285718    1383 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/e44bd6e4-ee1c-488f-9b89-90ae9b5880f8-tmp\") pod \"storage-provisioner\" (UID: \"e44bd6e4-ee1c-488f-9b89-90ae9b5880f8\") " pod="kube-system/storage-provisioner"
	May 31 19:08:45 multinode-025078 kubelet[1383]: W0531 19:08:45.523945    1383 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/9d3cbea3acdb0b8d1b32dece59d31af9111f937e060f05b7083a59033f8e8705/crio/crio-3b8ff3d34c2743c166673aa3c70b3317dd3fb403d1a233b1010b693fc08788ba WatchSource:0}: Error finding container 3b8ff3d34c2743c166673aa3c70b3317dd3fb403d1a233b1010b693fc08788ba: Status 404 returned error can't find the container with id 3b8ff3d34c2743c166673aa3c70b3317dd3fb403d1a233b1010b693fc08788ba
	May 31 19:08:45 multinode-025078 kubelet[1383]: E0531 19:08:45.682552    1383 configmap.go:199] Couldn't get configMap kube-system/coredns: failed to sync configmap cache: timed out waiting for the condition
	May 31 19:08:45 multinode-025078 kubelet[1383]: E0531 19:08:45.682649    1383 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/3f927f21-c6c7-43a2-a635-4fd3c672172d-config-volume podName:3f927f21-c6c7-43a2-a635-4fd3c672172d nodeName:}" failed. No retries permitted until 2023-05-31 19:08:46.182626502 +0000 UTC m=+17.046685334 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/3f927f21-c6c7-43a2-a635-4fd3c672172d-config-volume") pod "coredns-5d78c9869d-hhw4h" (UID: "3f927f21-c6c7-43a2-a635-4fd3c672172d") : failed to sync configmap cache: timed out waiting for the condition
	May 31 19:08:46 multinode-025078 kubelet[1383]: W0531 19:08:46.300602    1383 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/9d3cbea3acdb0b8d1b32dece59d31af9111f937e060f05b7083a59033f8e8705/crio/crio-5e693e7a01cd2130404825e5b85a37bcf90b34ff127fec58e37ed23aaf2dc6d6 WatchSource:0}: Error finding container 5e693e7a01cd2130404825e5b85a37bcf90b34ff127fec58e37ed23aaf2dc6d6: Status 404 returned error can't find the container with id 5e693e7a01cd2130404825e5b85a37bcf90b34ff127fec58e37ed23aaf2dc6d6
	May 31 19:08:47 multinode-025078 kubelet[1383]: I0531 19:08:47.401062    1383 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=3.401021328 podCreationTimestamp="2023-05-31 19:08:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-05-31 19:08:46.403043101 +0000 UTC m=+17.267101941" watchObservedRunningTime="2023-05-31 19:08:47.401021328 +0000 UTC m=+18.265080160"
	May 31 19:08:47 multinode-025078 kubelet[1383]: I0531 19:08:47.418593    1383 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5d78c9869d-hhw4h" podStartSLOduration=4.418551768 podCreationTimestamp="2023-05-31 19:08:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-05-31 19:08:47.402606184 +0000 UTC m=+18.266665024" watchObservedRunningTime="2023-05-31 19:08:47.418551768 +0000 UTC m=+18.282610641"
	May 31 19:09:08 multinode-025078 kubelet[1383]: I0531 19:09:08.029047    1383 topology_manager.go:212] "Topology Admit Handler"
	May 31 19:09:08 multinode-025078 kubelet[1383]: W0531 19:09:08.042918    1383 reflector.go:533] object-"default"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:multinode-025078" cannot list resource "configmaps" in API group "" in the namespace "default": no relationship found between node 'multinode-025078' and this object
	May 31 19:09:08 multinode-025078 kubelet[1383]: E0531 19:09:08.042961    1383 reflector.go:148] object-"default"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:multinode-025078" cannot list resource "configmaps" in API group "" in the namespace "default": no relationship found between node 'multinode-025078' and this object
	May 31 19:09:08 multinode-025078 kubelet[1383]: I0531 19:09:08.166619    1383 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fr2h5\" (UniqueName: \"kubernetes.io/projected/7b68836a-b29a-467a-8a9c-a17fb53833b2-kube-api-access-fr2h5\") pod \"busybox-67b7f59bb-fn4vn\" (UID: \"7b68836a-b29a-467a-8a9c-a17fb53833b2\") " pod="default/busybox-67b7f59bb-fn4vn"
	May 31 19:09:09 multinode-025078 kubelet[1383]: W0531 19:09:09.261486    1383 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/9d3cbea3acdb0b8d1b32dece59d31af9111f937e060f05b7083a59033f8e8705/crio/crio-db5fb96ba1c6dbc5a78c55ecf40355622b12ca5808b05a6b88177761569e1077 WatchSource:0}: Error finding container db5fb96ba1c6dbc5a78c55ecf40355622b12ca5808b05a6b88177761569e1077: Status 404 returned error can't find the container with id db5fb96ba1c6dbc5a78c55ecf40355622b12ca5808b05a6b88177761569e1077
	

                                                
                                                
-- /stdout --
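The kindnet log in the dump above shows the mechanism the multinode tests lean on: for each remote node, kindnetd programs a host route sending that node's pod CIDR via the node IP. Using the values reported in the log, the equivalent route is (an illustrative iproute2 one-liner, not necessarily the exact call kindnetd makes):

	ip route replace 10.244.1.0/24 via 192.168.58.3

A missing or wrong route of this shape breaks cross-node pod traffic, so it is one of the first things to rule out when a connectivity test such as PingHostFrom2Pods fails.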
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p multinode-025078 -n multinode-025078
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-025078 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/PingHostFrom2Pods FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/PingHostFrom2Pods (4.00s)

                                                
                                    
x
+
TestPreload (172.17s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-171645 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.24.4
E0531 19:15:18.520592    7804 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16569-2389/.minikube/profiles/functional-747104/client.crt: no such file or directory
preload_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-171645 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.24.4: (1m28.47362135s)
preload_test.go:57: (dbg) Run:  out/minikube-linux-arm64 ssh -p test-preload-171645 -- sudo crictl pull gcr.io/k8s-minikube/busybox
preload_test.go:57: (dbg) Done: out/minikube-linux-arm64 ssh -p test-preload-171645 -- sudo crictl pull gcr.io/k8s-minikube/busybox: (2.204424827s)
preload_test.go:63: (dbg) Run:  out/minikube-linux-arm64 stop -p test-preload-171645
preload_test.go:63: (dbg) Done: out/minikube-linux-arm64 stop -p test-preload-171645: (5.844216558s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-171645 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio
E0531 19:16:41.564178    7804 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16569-2389/.minikube/profiles/functional-747104/client.crt: no such file or directory
E0531 19:17:05.253931    7804 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16569-2389/.minikube/profiles/addons-748280/client.crt: no such file or directory
preload_test.go:71: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-171645 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio: (1m10.603037886s)
preload_test.go:80: (dbg) Run:  out/minikube-linux-arm64 ssh -p test-preload-171645 -- sudo crictl image ls
preload_test.go:85: Expected to find gcr.io/k8s-minikube/busybox in image list output, instead got 
-- stdout --
	IMAGE               TAG                 IMAGE ID            SIZE

                                                
                                                
-- /stdout --
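The sequence above is the behavior under test: an image pulled into the cri-o store while preload is disabled should survive a stop/start of the same profile, and the empty image list shows it did not. A hand-run repro of the same check, using the invocations and profile name taken from the log (illustrative; any fresh profile name would do):

	minikube start -p test-preload-171645 --memory=2200 --preload=false --driver=docker --container-runtime=crio --kubernetes-version=v1.24.4
	minikube ssh -p test-preload-171645 -- sudo crictl pull gcr.io/k8s-minikube/busybox
	minikube stop -p test-preload-171645
	minikube start -p test-preload-171645 --memory=2200 --driver=docker --container-runtime=crio
	minikube ssh -p test-preload-171645 -- sudo crictl image ls   # busybox should still be listed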
panic.go:522: *** TestPreload FAILED at 2023-05-31 19:17:50.850985146 +0000 UTC m=+2027.483893914
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestPreload]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect test-preload-171645
helpers_test.go:235: (dbg) docker inspect test-preload-171645:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "e5f2892ff524db7d65c484d67edc34bdf09192ed243f8ed7b2d47a248355b354",
	        "Created": "2023-05-31T19:15:04.897721267Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 100032,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-05-31T19:16:49.494519479Z",
	            "FinishedAt": "2023-05-31T19:16:39.6717415Z"
	        },
	        "Image": "sha256:dee4774de4f99268e16c76379a36a4607bed47635a069a7e60c17cd24d9aaa76",
	        "ResolvConfPath": "/var/lib/docker/containers/e5f2892ff524db7d65c484d67edc34bdf09192ed243f8ed7b2d47a248355b354/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/e5f2892ff524db7d65c484d67edc34bdf09192ed243f8ed7b2d47a248355b354/hostname",
	        "HostsPath": "/var/lib/docker/containers/e5f2892ff524db7d65c484d67edc34bdf09192ed243f8ed7b2d47a248355b354/hosts",
	        "LogPath": "/var/lib/docker/containers/e5f2892ff524db7d65c484d67edc34bdf09192ed243f8ed7b2d47a248355b354/e5f2892ff524db7d65c484d67edc34bdf09192ed243f8ed7b2d47a248355b354-json.log",
	        "Name": "/test-preload-171645",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "test-preload-171645:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "test-preload-171645",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/23a4124b90f180e2b3ad1e08617eef211d4fdea0ac30caa414adaa1de9243036-init/diff:/var/lib/docker/overlay2/548bced7e749d102323bab71db162b075785f916e2a896d29f3adc2c3d7fbea8/diff",
	                "MergedDir": "/var/lib/docker/overlay2/23a4124b90f180e2b3ad1e08617eef211d4fdea0ac30caa414adaa1de9243036/merged",
	                "UpperDir": "/var/lib/docker/overlay2/23a4124b90f180e2b3ad1e08617eef211d4fdea0ac30caa414adaa1de9243036/diff",
	                "WorkDir": "/var/lib/docker/overlay2/23a4124b90f180e2b3ad1e08617eef211d4fdea0ac30caa414adaa1de9243036/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "test-preload-171645",
	                "Source": "/var/lib/docker/volumes/test-preload-171645/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "test-preload-171645",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685034446-16582@sha256:aa728b22374c829d1e5b0a5d64d51d3e0ae0f2b191381d957516fdff68f357c8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "test-preload-171645",
	                "name.minikube.sigs.k8s.io": "test-preload-171645",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "d32a710f9852a6660d93019436f9bfcbff8c0974a21487f953f70d56dda36061",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32904"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32903"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32900"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32902"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32901"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/d32a710f9852",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "test-preload-171645": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.67.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "e5f2892ff524",
	                        "test-preload-171645"
	                    ],
	                    "NetworkID": "cbe7394e6a50e9119e8ef195b066310898649016367c4c62abf6a05653e68cff",
	                    "EndpointID": "2fee221ed29dc94fbe5e4f19198b5edef5efa9fcfe854cab5993edf5cd9b0226",
	                    "Gateway": "192.168.67.1",
	                    "IPAddress": "192.168.67.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:43:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
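Most of the inspect dump above is healthy boilerplate; the fields relevant to this failure are the restart timestamps and the named volume backing /var, where the cri-o image store lives. Narrower queries for just those, assuming the docker CLI's Go-template output (field names as they appear in the dump):

	docker inspect -f '{{.State.StartedAt}} / {{.State.FinishedAt}}' test-preload-171645
	docker inspect -f '{{range .Mounts}}{{.Type}} {{.Source}} -> {{.Destination}}{{println}}{{end}}' test-preload-171645

Since /var is backed by a volume that survives the stop/start, images vanishing across the restart points at the image store being altered during the second start rather than at the container being recreated.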
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p test-preload-171645 -n test-preload-171645
helpers_test.go:244: <<< TestPreload FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPreload]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-171645 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p test-preload-171645 logs -n 25: (1.384037341s)
helpers_test.go:252: TestPreload logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-025078 ssh -n                                                                 | multinode-025078     | jenkins | v1.30.1 | 31 May 23 19:10 UTC | 31 May 23 19:10 UTC |
	|         | multinode-025078-m03 sudo cat                                                           |                      |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                      |         |         |                     |                     |
	| ssh     | multinode-025078 ssh -n multinode-025078 sudo cat                                       | multinode-025078     | jenkins | v1.30.1 | 31 May 23 19:10 UTC | 31 May 23 19:10 UTC |
	|         | /home/docker/cp-test_multinode-025078-m03_multinode-025078.txt                          |                      |         |         |                     |                     |
	| cp      | multinode-025078 cp multinode-025078-m03:/home/docker/cp-test.txt                       | multinode-025078     | jenkins | v1.30.1 | 31 May 23 19:10 UTC | 31 May 23 19:10 UTC |
	|         | multinode-025078-m02:/home/docker/cp-test_multinode-025078-m03_multinode-025078-m02.txt |                      |         |         |                     |                     |
	| ssh     | multinode-025078 ssh -n                                                                 | multinode-025078     | jenkins | v1.30.1 | 31 May 23 19:10 UTC | 31 May 23 19:10 UTC |
	|         | multinode-025078-m03 sudo cat                                                           |                      |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                      |         |         |                     |                     |
	| ssh     | multinode-025078 ssh -n multinode-025078-m02 sudo cat                                   | multinode-025078     | jenkins | v1.30.1 | 31 May 23 19:10 UTC | 31 May 23 19:10 UTC |
	|         | /home/docker/cp-test_multinode-025078-m03_multinode-025078-m02.txt                      |                      |         |         |                     |                     |
	| node    | multinode-025078 node stop m03                                                          | multinode-025078     | jenkins | v1.30.1 | 31 May 23 19:10 UTC | 31 May 23 19:10 UTC |
	| node    | multinode-025078 node start                                                             | multinode-025078     | jenkins | v1.30.1 | 31 May 23 19:10 UTC | 31 May 23 19:10 UTC |
	|         | m03 --alsologtostderr                                                                   |                      |         |         |                     |                     |
	| node    | list -p multinode-025078                                                                | multinode-025078     | jenkins | v1.30.1 | 31 May 23 19:10 UTC |                     |
	| stop    | -p multinode-025078                                                                     | multinode-025078     | jenkins | v1.30.1 | 31 May 23 19:10 UTC | 31 May 23 19:10 UTC |
	| start   | -p multinode-025078                                                                     | multinode-025078     | jenkins | v1.30.1 | 31 May 23 19:10 UTC | 31 May 23 19:12 UTC |
	|         | --wait=true -v=8                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                      |         |         |                     |                     |
	| node    | list -p multinode-025078                                                                | multinode-025078     | jenkins | v1.30.1 | 31 May 23 19:12 UTC |                     |
	| node    | multinode-025078 node delete                                                            | multinode-025078     | jenkins | v1.30.1 | 31 May 23 19:12 UTC | 31 May 23 19:12 UTC |
	|         | m03                                                                                     |                      |         |         |                     |                     |
	| stop    | multinode-025078 stop                                                                   | multinode-025078     | jenkins | v1.30.1 | 31 May 23 19:12 UTC | 31 May 23 19:12 UTC |
	| start   | -p multinode-025078                                                                     | multinode-025078     | jenkins | v1.30.1 | 31 May 23 19:12 UTC | 31 May 23 19:14 UTC |
	|         | --wait=true -v=8                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                      |         |         |                     |                     |
	|         | --driver=docker                                                                         |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| node    | list -p multinode-025078                                                                | multinode-025078     | jenkins | v1.30.1 | 31 May 23 19:14 UTC |                     |
	| start   | -p multinode-025078-m02                                                                 | multinode-025078-m02 | jenkins | v1.30.1 | 31 May 23 19:14 UTC |                     |
	|         | --driver=docker                                                                         |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| start   | -p multinode-025078-m03                                                                 | multinode-025078-m03 | jenkins | v1.30.1 | 31 May 23 19:14 UTC | 31 May 23 19:14 UTC |
	|         | --driver=docker                                                                         |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| node    | add -p multinode-025078                                                                 | multinode-025078     | jenkins | v1.30.1 | 31 May 23 19:14 UTC |                     |
	| delete  | -p multinode-025078-m03                                                                 | multinode-025078-m03 | jenkins | v1.30.1 | 31 May 23 19:14 UTC | 31 May 23 19:14 UTC |
	| delete  | -p multinode-025078                                                                     | multinode-025078     | jenkins | v1.30.1 | 31 May 23 19:14 UTC | 31 May 23 19:15 UTC |
	| start   | -p test-preload-171645                                                                  | test-preload-171645  | jenkins | v1.30.1 | 31 May 23 19:15 UTC | 31 May 23 19:16 UTC |
	|         | --memory=2200                                                                           |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                      |         |         |                     |                     |
	|         | --wait=true --preload=false                                                             |                      |         |         |                     |                     |
	|         | --driver=docker                                                                         |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.24.4                                                            |                      |         |         |                     |                     |
	| ssh     | -p test-preload-171645                                                                  | test-preload-171645  | jenkins | v1.30.1 | 31 May 23 19:16 UTC | 31 May 23 19:16 UTC |
	|         | -- sudo crictl pull                                                                     |                      |         |         |                     |                     |
	|         | gcr.io/k8s-minikube/busybox                                                             |                      |         |         |                     |                     |
	| stop    | -p test-preload-171645                                                                  | test-preload-171645  | jenkins | v1.30.1 | 31 May 23 19:16 UTC | 31 May 23 19:16 UTC |
	| start   | -p test-preload-171645                                                                  | test-preload-171645  | jenkins | v1.30.1 | 31 May 23 19:16 UTC | 31 May 23 19:17 UTC |
	|         | --memory=2200                                                                           |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                  |                      |         |         |                     |                     |
	|         | --wait=true --driver=docker                                                             |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| ssh     | -p test-preload-171645 -- sudo                                                          | test-preload-171645  | jenkins | v1.30.1 | 31 May 23 19:17 UTC | 31 May 23 19:17 UTC |
	|         | crictl image ls                                                                         |                      |         |         |                     |                     |
	|---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
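
	The failing preload scenario recorded in the rows above can be replayed outside the harness. A minimal Go sketch follows, assuming the minikube binary sits at out/minikube-linux-arm64 as in the audit rows; the flags are copied verbatim from the final "start -p test-preload-171645" row:

	package main

	import (
		"os"
		"os/exec"
	)

	// Replays the second "start -p test-preload-171645" invocation from the
	// audit table. Binary path is an assumption taken from the table rows;
	// adjust it for your checkout.
	func main() {
		cmd := exec.Command("out/minikube-linux-arm64",
			"start", "-p", "test-preload-171645",
			"--memory=2200", "--alsologtostderr", "-v=1",
			"--wait=true", "--driver=docker", "--container-runtime=crio")
		cmd.Stdout = os.Stdout
		cmd.Stderr = os.Stderr
		if err := cmd.Run(); err != nil {
			os.Exit(1)
		}
	}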
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/05/31 19:16:39
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.20.4 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
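
	Lines in this klog-style format split cleanly with a regular expression. A small self-contained Go sketch, where the pattern is our reading of the format string above rather than anything shipped with minikube:

	package main

	import (
		"fmt"
		"regexp"
	)

	// Parses klog-style lines of the form documented above:
	// [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	var klogLine = regexp.MustCompile(
		`^([IWEF])(\d{4}) (\d{2}:\d{2}:\d{2}\.\d{6})\s+(\d+) ([^:]+):(\d+)\] (.*)$`)

	func main() {
		m := klogLine.FindStringSubmatch(
			"I0531 19:16:39.970037   99837 out.go:296] Setting OutFile to fd 1 ...")
		if m != nil {
			fmt.Printf("level=%s date=%s time=%s tid=%s file=%s:%s msg=%q\n",
				m[1], m[2], m[3], m[4], m[5], m[6], m[7])
		}
	}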
	I0531 19:16:39.970037   99837 out.go:296] Setting OutFile to fd 1 ...
	I0531 19:16:39.970237   99837 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0531 19:16:39.970263   99837 out.go:309] Setting ErrFile to fd 2...
	I0531 19:16:39.970286   99837 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0531 19:16:39.970551   99837 root.go:336] Updating PATH: /home/jenkins/minikube-integration/16569-2389/.minikube/bin
	I0531 19:16:39.971083   99837 out.go:303] Setting JSON to false
	I0531 19:16:39.972078   99837 start.go:127] hostinfo: {"hostname":"ip-172-31-21-244","uptime":3545,"bootTime":1685557055,"procs":273,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1037-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I0531 19:16:39.972199   99837 start.go:137] virtualization:  
	I0531 19:16:39.975154   99837 out.go:177] * [test-preload-171645] minikube v1.30.1 on Ubuntu 20.04 (arm64)
	I0531 19:16:39.977341   99837 out.go:177]   - MINIKUBE_LOCATION=16569
	I0531 19:16:39.979327   99837 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0531 19:16:39.977456   99837 notify.go:220] Checking for updates...
	I0531 19:16:39.981168   99837 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/16569-2389/kubeconfig
	I0531 19:16:39.983137   99837 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/16569-2389/.minikube
	I0531 19:16:39.984882   99837 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0531 19:16:39.986802   99837 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0531 19:16:39.989263   99837 config.go:182] Loaded profile config "test-preload-171645": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.24.4
	I0531 19:16:39.992046   99837 out.go:177] * Kubernetes 1.27.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.27.2
	I0531 19:16:39.993860   99837 driver.go:375] Setting default libvirt URI to qemu:///system
	I0531 19:16:40.025673   99837 docker.go:121] docker version: linux-24.0.2:Docker Engine - Community
	I0531 19:16:40.025775   99837 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0531 19:16:40.114021   99837 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:0 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:30 OomKillDisable:true NGoroutines:35 SystemTime:2023-05-31 19:16:40.103446539 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1037-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215166976 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:24.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dce8eb055cbb6872793272b4f20ed16117344f8 Expected:3dce8eb055cbb6872793272b4f20ed16117344f8} RuncCommit:{ID:v1.1.7-0-g860f061 Expected:v1.1.7-0-g860f061} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.5] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.18.1]] Warnings:<nil>}}
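
	The probe above shells out to "docker system info --format {{json .}}" and decodes the JSON in-process. A standalone sketch of the same probe, decoding only a few of the fields visible in the log line (field names are taken from that output; Docker must be installed for this to run):

	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	// Mirrors the "docker system info --format {{json .}}" probe above,
	// keeping only a subset of the fields the log prints.
	type dockerInfo struct {
		NCPU            int
		MemTotal        int64
		ServerVersion   string
		CgroupDriver    string
		OperatingSystem string
	}

	func main() {
		out, err := exec.Command("docker", "system", "info",
			"--format", "{{json .}}").Output()
		if err != nil {
			panic(err)
		}
		var info dockerInfo
		if err := json.Unmarshal(out, &info); err != nil {
			panic(err)
		}
		fmt.Printf("%s on %s: %d CPUs, %d bytes RAM, cgroup driver %s\n",
			info.ServerVersion, info.OperatingSystem,
			info.NCPU, info.MemTotal, info.CgroupDriver)
	}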
	I0531 19:16:40.114137   99837 docker.go:294] overlay module found
	I0531 19:16:40.117031   99837 out.go:177] * Using the docker driver based on existing profile
	I0531 19:16:40.118804   99837 start.go:297] selected driver: docker
	I0531 19:16:40.118828   99837 start.go:875] validating driver "docker" against &{Name:test-preload-171645 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685034446-16582@sha256:aa728b22374c829d1e5b0a5d64d51d3e0ae0f2b191381d957516fdff68f357c8 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-171645 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0531 19:16:40.118953   99837 start.go:886] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0531 19:16:40.119621   99837 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0531 19:16:40.193639   99837 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:0 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:30 OomKillDisable:true NGoroutines:35 SystemTime:2023-05-31 19:16:40.183551262 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1037-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215166976 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:24.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dce8eb055cbb6872793272b4f20ed16117344f8 Expected:3dce8eb055cbb6872793272b4f20ed16117344f8} RuncCommit:{ID:v1.1.7-0-g860f061 Expected:v1.1.7-0-g860f061} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.5] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.18.1]] Warnings:<nil>}}
	I0531 19:16:40.193955   99837 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0531 19:16:40.193981   99837 cni.go:84] Creating CNI manager for ""
	I0531 19:16:40.193991   99837 cni.go:142] "docker" driver + "crio" runtime found, recommending kindnet
	I0531 19:16:40.194003   99837 start_flags.go:319] config:
	{Name:test-preload-171645 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685034446-16582@sha256:aa728b22374c829d1e5b0a5d64d51d3e0ae0f2b191381d957516fdff68f357c8 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-171645 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0531 19:16:40.196897   99837 out.go:177] * Starting control plane node test-preload-171645 in cluster test-preload-171645
	I0531 19:16:40.198672   99837 cache.go:122] Beginning downloading kic base image for docker with crio
	I0531 19:16:40.200200   99837 out.go:177] * Pulling base image ...
	I0531 19:16:40.201616   99837 preload.go:132] Checking if preload exists for k8s version v1.24.4 and runtime crio
	I0531 19:16:40.201687   99837 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685034446-16582@sha256:aa728b22374c829d1e5b0a5d64d51d3e0ae0f2b191381d957516fdff68f357c8 in local docker daemon
	I0531 19:16:40.219254   99837 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685034446-16582@sha256:aa728b22374c829d1e5b0a5d64d51d3e0ae0f2b191381d957516fdff68f357c8 in local docker daemon, skipping pull
	I0531 19:16:40.219277   99837 cache.go:145] gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685034446-16582@sha256:aa728b22374c829d1e5b0a5d64d51d3e0ae0f2b191381d957516fdff68f357c8 exists in daemon, skipping load
	I0531 19:16:40.270562   99837 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.24.4/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-arm64.tar.lz4
	I0531 19:16:40.270585   99837 cache.go:57] Caching tarball of preloaded images
	I0531 19:16:40.270770   99837 preload.go:132] Checking if preload exists for k8s version v1.24.4 and runtime crio
	I0531 19:16:40.272621   99837 out.go:177] * Downloading Kubernetes v1.24.4 preload ...
	I0531 19:16:40.274209   99837 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-arm64.tar.lz4 ...
	I0531 19:16:40.399246   99837 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.24.4/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-arm64.tar.lz4?checksum=md5:d2db394df12e407c28bb66857d0d812b -> /home/jenkins/minikube-integration/16569-2389/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-arm64.tar.lz4
	I0531 19:16:48.239262   99837 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-arm64.tar.lz4 ...
	I0531 19:16:48.239378   99837 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/16569-2389/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-arm64.tar.lz4 ...
	I0531 19:16:49.114562   99837 cache.go:60] Finished verifying existence of preloaded tar for  v1.24.4 on crio
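
	The download above carries its expected digest in the "?checksum=md5:..." query parameter, and the subsequent steps save and verify it. A minimal Go sketch of download-plus-MD5-verify, using the URL and digest from the log lines (the helper name and destination path are ours; the real minikube path also handles caching and resume):

	package main

	import (
		"crypto/md5"
		"encoding/hex"
		"fmt"
		"io"
		"net/http"
		"os"
	)

	// downloadWithMD5 streams the preload tarball to disk while hashing it,
	// then compares against the md5 value the log passes as a query param.
	func downloadWithMD5(url, wantMD5, dest string) error {
		resp, err := http.Get(url)
		if err != nil {
			return err
		}
		defer resp.Body.Close()
		f, err := os.Create(dest)
		if err != nil {
			return err
		}
		defer f.Close()
		h := md5.New()
		if _, err := io.Copy(io.MultiWriter(f, h), resp.Body); err != nil {
			return err
		}
		if got := hex.EncodeToString(h.Sum(nil)); got != wantMD5 {
			return fmt.Errorf("checksum mismatch: got %s, want %s", got, wantMD5)
		}
		return nil
	}

	func main() {
		err := downloadWithMD5(
			"https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.24.4/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-arm64.tar.lz4",
			"d2db394df12e407c28bb66857d0d812b",
			"preloaded-images.tar.lz4")
		if err != nil {
			panic(err)
		}
	}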
	I0531 19:16:49.114715   99837 profile.go:148] Saving config to /home/jenkins/minikube-integration/16569-2389/.minikube/profiles/test-preload-171645/config.json ...
	I0531 19:16:49.114971   99837 cache.go:195] Successfully downloaded all kic artifacts
	I0531 19:16:49.115020   99837 start.go:364] acquiring machines lock for test-preload-171645: {Name:mkcf7ba89c058a85c22502f724db7c1be47803a9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0531 19:16:49.115086   99837 start.go:368] acquired machines lock for "test-preload-171645" in 41.14µs
	I0531 19:16:49.115098   99837 start.go:96] Skipping create...Using existing machine configuration
	I0531 19:16:49.115108   99837 fix.go:55] fixHost starting: 
	I0531 19:16:49.115377   99837 cli_runner.go:164] Run: docker container inspect test-preload-171645 --format={{.State.Status}}
	I0531 19:16:49.137125   99837 fix.go:103] recreateIfNeeded on test-preload-171645: state=Stopped err=<nil>
	W0531 19:16:49.137155   99837 fix.go:129] unexpected machine state, will restart: <nil>
	I0531 19:16:49.139602   99837 out.go:177] * Restarting existing docker container for "test-preload-171645" ...
	I0531 19:16:49.141384   99837 cli_runner.go:164] Run: docker start test-preload-171645
	I0531 19:16:49.502710   99837 cli_runner.go:164] Run: docker container inspect test-preload-171645 --format={{.State.Status}}
	I0531 19:16:49.529400   99837 kic.go:426] container "test-preload-171645" state is running.
	I0531 19:16:49.529793   99837 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" test-preload-171645
	I0531 19:16:49.555575   99837 profile.go:148] Saving config to /home/jenkins/minikube-integration/16569-2389/.minikube/profiles/test-preload-171645/config.json ...
	I0531 19:16:49.555808   99837 machine.go:88] provisioning docker machine ...
	I0531 19:16:49.555831   99837 ubuntu.go:169] provisioning hostname "test-preload-171645"
	I0531 19:16:49.555879   99837 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-171645
	I0531 19:16:49.575600   99837 main.go:141] libmachine: Using SSH client type: native
	I0531 19:16:49.576216   99837 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x39eaa0] 0x3a1430 <nil>  [] 0s} 127.0.0.1 32904 <nil> <nil>}
	I0531 19:16:49.576241   99837 main.go:141] libmachine: About to run SSH command:
	sudo hostname test-preload-171645 && echo "test-preload-171645" | sudo tee /etc/hostname
	I0531 19:16:49.576757   99837 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:50960->127.0.0.1:32904: read: connection reset by peer
	I0531 19:16:52.721335   99837 main.go:141] libmachine: SSH cmd err, output: <nil>: test-preload-171645
	
	I0531 19:16:52.721424   99837 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-171645
	I0531 19:16:52.741024   99837 main.go:141] libmachine: Using SSH client type: native
	I0531 19:16:52.741468   99837 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x39eaa0] 0x3a1430 <nil>  [] 0s} 127.0.0.1 32904 <nil> <nil>}
	I0531 19:16:52.741492   99837 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\stest-preload-171645' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 test-preload-171645/g' /etc/hosts;
				else 
					echo '127.0.1.1 test-preload-171645' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0531 19:16:52.867900   99837 main.go:141] libmachine: SSH cmd err, output: <nil>: 
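
	The shell snippet above is idempotent: it leaves /etc/hosts alone when the hostname is already present, rewrites an existing 127.0.1.1 line otherwise, and appends one as a last resort. The same three-way logic in Go, for illustration only (the helper is ours; the path and hostname come from the log):

	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	// ensureHostEntry mirrors the grep/sed/tee logic above: skip if present,
	// replace an existing 127.0.1.1 line, or append a fresh entry.
	func ensureHostEntry(hosts, name string) string {
		if strings.Contains(hosts, name) {
			return hosts // grep branch: entry already present
		}
		lines := strings.Split(hosts, "\n")
		for i, l := range lines {
			if strings.HasPrefix(l, "127.0.1.1") {
				lines[i] = "127.0.1.1 " + name // sed branch
				return strings.Join(lines, "\n")
			}
		}
		return hosts + "127.0.1.1 " + name + "\n" // tee -a branch
	}

	func main() {
		b, err := os.ReadFile("/etc/hosts")
		if err != nil {
			panic(err)
		}
		fmt.Print(ensureHostEntry(string(b), "test-preload-171645"))
	}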
	I0531 19:16:52.867926   99837 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/16569-2389/.minikube CaCertPath:/home/jenkins/minikube-integration/16569-2389/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/16569-2389/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/16569-2389/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/16569-2389/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/16569-2389/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/16569-2389/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/16569-2389/.minikube}
	I0531 19:16:52.867954   99837 ubuntu.go:177] setting up certificates
	I0531 19:16:52.867966   99837 provision.go:83] configureAuth start
	I0531 19:16:52.868028   99837 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" test-preload-171645
	I0531 19:16:52.887803   99837 provision.go:138] copyHostCerts
	I0531 19:16:52.887897   99837 exec_runner.go:144] found /home/jenkins/minikube-integration/16569-2389/.minikube/key.pem, removing ...
	I0531 19:16:52.887906   99837 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16569-2389/.minikube/key.pem
	I0531 19:16:52.887988   99837 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16569-2389/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/16569-2389/.minikube/key.pem (1679 bytes)
	I0531 19:16:52.888092   99837 exec_runner.go:144] found /home/jenkins/minikube-integration/16569-2389/.minikube/ca.pem, removing ...
	I0531 19:16:52.888097   99837 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16569-2389/.minikube/ca.pem
	I0531 19:16:52.888122   99837 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16569-2389/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/16569-2389/.minikube/ca.pem (1078 bytes)
	I0531 19:16:52.888175   99837 exec_runner.go:144] found /home/jenkins/minikube-integration/16569-2389/.minikube/cert.pem, removing ...
	I0531 19:16:52.888181   99837 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16569-2389/.minikube/cert.pem
	I0531 19:16:52.888205   99837 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16569-2389/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/16569-2389/.minikube/cert.pem (1123 bytes)
	I0531 19:16:52.888248   99837 provision.go:112] generating server cert: /home/jenkins/minikube-integration/16569-2389/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/16569-2389/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/16569-2389/.minikube/certs/ca-key.pem org=jenkins.test-preload-171645 san=[192.168.67.2 127.0.0.1 localhost 127.0.0.1 minikube test-preload-171645]
	I0531 19:16:53.790840   99837 provision.go:172] copyRemoteCerts
	I0531 19:16:53.790909   99837 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0531 19:16:53.790953   99837 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-171645
	I0531 19:16:53.816293   99837 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32904 SSHKeyPath:/home/jenkins/minikube-integration/16569-2389/.minikube/machines/test-preload-171645/id_rsa Username:docker}
	I0531 19:16:53.909471   99837 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16569-2389/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0531 19:16:53.938253   99837 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16569-2389/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0531 19:16:53.967014   99837 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16569-2389/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0531 19:16:53.995774   99837 provision.go:86] duration metric: configureAuth took 1.127793804s
	I0531 19:16:53.995800   99837 ubuntu.go:193] setting minikube options for container-runtime
	I0531 19:16:53.995986   99837 config.go:182] Loaded profile config "test-preload-171645": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.24.4
	I0531 19:16:53.996100   99837 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-171645
	I0531 19:16:54.014950   99837 main.go:141] libmachine: Using SSH client type: native
	I0531 19:16:54.015402   99837 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x39eaa0] 0x3a1430 <nil>  [] 0s} 127.0.0.1 32904 <nil> <nil>}
	I0531 19:16:54.015425   99837 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0531 19:16:54.339837   99837 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0531 19:16:54.339861   99837 machine.go:91] provisioned docker machine in 4.784037616s
	I0531 19:16:54.339872   99837 start.go:300] post-start starting for "test-preload-171645" (driver="docker")
	I0531 19:16:54.339878   99837 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0531 19:16:54.339950   99837 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0531 19:16:54.339993   99837 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-171645
	I0531 19:16:54.369891   99837 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32904 SSHKeyPath:/home/jenkins/minikube-integration/16569-2389/.minikube/machines/test-preload-171645/id_rsa Username:docker}
	I0531 19:16:54.466497   99837 ssh_runner.go:195] Run: cat /etc/os-release
	I0531 19:16:54.471154   99837 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0531 19:16:54.471261   99837 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0531 19:16:54.471278   99837 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0531 19:16:54.471285   99837 info.go:137] Remote host: Ubuntu 22.04.2 LTS
	I0531 19:16:54.471303   99837 filesync.go:126] Scanning /home/jenkins/minikube-integration/16569-2389/.minikube/addons for local assets ...
	I0531 19:16:54.471383   99837 filesync.go:126] Scanning /home/jenkins/minikube-integration/16569-2389/.minikube/files for local assets ...
	I0531 19:16:54.471473   99837 filesync.go:149] local asset: /home/jenkins/minikube-integration/16569-2389/.minikube/files/etc/ssl/certs/78042.pem -> 78042.pem in /etc/ssl/certs
	I0531 19:16:54.471589   99837 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0531 19:16:54.482575   99837 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16569-2389/.minikube/files/etc/ssl/certs/78042.pem --> /etc/ssl/certs/78042.pem (1708 bytes)
	I0531 19:16:54.513195   99837 start.go:303] post-start completed in 173.309169ms
	I0531 19:16:54.513273   99837 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0531 19:16:54.513328   99837 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-171645
	I0531 19:16:54.532508   99837 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32904 SSHKeyPath:/home/jenkins/minikube-integration/16569-2389/.minikube/machines/test-preload-171645/id_rsa Username:docker}
	I0531 19:16:54.624819   99837 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0531 19:16:54.630775   99837 fix.go:57] fixHost completed within 5.515659832s
	I0531 19:16:54.630798   99837 start.go:83] releasing machines lock for "test-preload-171645", held for 5.515703171s
	I0531 19:16:54.630873   99837 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" test-preload-171645
	I0531 19:16:54.648795   99837 ssh_runner.go:195] Run: cat /version.json
	I0531 19:16:54.648858   99837 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-171645
	I0531 19:16:54.649129   99837 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0531 19:16:54.649189   99837 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-171645
	I0531 19:16:54.668405   99837 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32904 SSHKeyPath:/home/jenkins/minikube-integration/16569-2389/.minikube/machines/test-preload-171645/id_rsa Username:docker}
	I0531 19:16:54.675842   99837 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32904 SSHKeyPath:/home/jenkins/minikube-integration/16569-2389/.minikube/machines/test-preload-171645/id_rsa Username:docker}
	I0531 19:16:54.763255   99837 ssh_runner.go:195] Run: systemctl --version
	I0531 19:16:54.910159   99837 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0531 19:16:55.060111   99837 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0531 19:16:55.065906   99837 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0531 19:16:55.077332   99837 cni.go:220] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0531 19:16:55.077453   99837 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0531 19:16:55.090123   99837 cni.go:258] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0531 19:16:55.090148   99837 start.go:481] detecting cgroup driver to use...
	I0531 19:16:55.090214   99837 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0531 19:16:55.090284   99837 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0531 19:16:55.105642   99837 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0531 19:16:55.120280   99837 docker.go:193] disabling cri-docker service (if available) ...
	I0531 19:16:55.120366   99837 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0531 19:16:55.136616   99837 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0531 19:16:55.151366   99837 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0531 19:16:55.242831   99837 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0531 19:16:55.335019   99837 docker.go:209] disabling docker service ...
	I0531 19:16:55.335132   99837 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0531 19:16:55.349807   99837 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0531 19:16:55.363545   99837 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0531 19:16:55.452658   99837 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0531 19:16:55.558937   99837 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0531 19:16:55.572203   99837 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0531 19:16:55.591991   99837 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.7" pause image...
	I0531 19:16:55.592071   99837 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.7"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0531 19:16:55.608911   99837 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0531 19:16:55.609024   99837 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0531 19:16:55.620919   99837 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0531 19:16:55.633147   99837 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0531 19:16:55.645442   99837 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0531 19:16:55.656928   99837 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0531 19:16:55.667480   99837 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0531 19:16:55.678345   99837 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0531 19:16:55.766771   99837 ssh_runner.go:195] Run: sudo systemctl restart crio
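
	The sed invocations above pin the pause image and force the cgroupfs manager in /etc/crio/crio.conf.d/02-crio.conf before crio restarts. The same line rewrites expressed in Go, as a sketch (the starting conf values here are hypothetical; only the replacement values come from the log):

	package main

	import (
		"fmt"
		"regexp"
	)

	// Mirrors the two sed edits above: pin the pause image and force the
	// cgroupfs cgroup manager in a crio drop-in config.
	func main() {
		conf := `pause_image = "registry.k8s.io/pause:3.6"
	cgroup_manager = "systemd"
	`
		conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
			ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.7"`)
		conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
			ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)
		fmt.Print(conf)
	}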
	I0531 19:16:55.893229   99837 start.go:528] Will wait 60s for socket path /var/run/crio/crio.sock
	I0531 19:16:55.893320   99837 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0531 19:16:55.898168   99837 start.go:549] Will wait 60s for crictl version
	I0531 19:16:55.898278   99837 ssh_runner.go:195] Run: which crictl
	I0531 19:16:55.902866   99837 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0531 19:16:55.943769   99837 start.go:565] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.5
	RuntimeApiVersion:  v1
	I0531 19:16:55.943877   99837 ssh_runner.go:195] Run: crio --version
	I0531 19:16:55.993069   99837 ssh_runner.go:195] Run: crio --version
	I0531 19:16:56.042822   99837 out.go:177] * Preparing Kubernetes v1.24.4 on CRI-O 1.24.5 ...
	I0531 19:16:56.045050   99837 cli_runner.go:164] Run: docker network inspect test-preload-171645 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0531 19:16:56.063201   99837 ssh_runner.go:195] Run: grep 192.168.67.1	host.minikube.internal$ /etc/hosts
	I0531 19:16:56.068141   99837 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.67.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0531 19:16:56.082284   99837 preload.go:132] Checking if preload exists for k8s version v1.24.4 and runtime crio
	I0531 19:16:56.082357   99837 ssh_runner.go:195] Run: sudo crictl images --output json
	I0531 19:16:56.132461   99837 crio.go:496] all images are preloaded for cri-o runtime.
	I0531 19:16:56.132483   99837 crio.go:415] Images already preloaded, skipping extraction
	I0531 19:16:56.132552   99837 ssh_runner.go:195] Run: sudo crictl images --output json
	I0531 19:16:56.175783   99837 crio.go:496] all images are preloaded for cri-o runtime.
	I0531 19:16:56.175807   99837 cache_images.go:84] Images are preloaded, skipping loading
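
	The preload check above runs "sudo crictl images --output json" twice and decides the images are already present. A standalone version of that probe; the JSON shape (a top-level "images" array with "repoTags") is an assumption based on crictl's documented output, not copied from the minikube source:

	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	// imageList models the subset of crictl's JSON output we read here.
	type imageList struct {
		Images []struct {
			RepoTags []string `json:"repoTags"`
		} `json:"images"`
	}

	func main() {
		out, err := exec.Command("sudo", "crictl", "images",
			"--output", "json").Output()
		if err != nil {
			panic(err)
		}
		var list imageList
		if err := json.Unmarshal(out, &list); err != nil {
			panic(err)
		}
		fmt.Printf("%d images present\n", len(list.Images))
		for _, img := range list.Images {
			fmt.Println(img.RepoTags)
		}
	}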
	I0531 19:16:56.175911   99837 ssh_runner.go:195] Run: crio config
	I0531 19:16:56.257779   99837 cni.go:84] Creating CNI manager for ""
	I0531 19:16:56.257802   99837 cni.go:142] "docker" driver + "crio" runtime found, recommending kindnet
	I0531 19:16:56.257814   99837 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0531 19:16:56.257862   99837 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.67.2 APIServerPort:8443 KubernetesVersion:v1.24.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:test-preload-171645 NodeName:test-preload-171645 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.67.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.67.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0531 19:16:56.258046   99837 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.67.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "test-preload-171645"
	  kubeletExtraArgs:
	    node-ip: 192.168.67.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.67.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
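
	The kubeadm config above is rendered from the cluster config shown earlier in the log (node IP, CRI socket, node name, API server port). A toy equivalent of that rendering step with text/template; the template literal here is ours for illustration, not minikube's actual template:

	package main

	import (
		"os"
		"text/template"
	)

	// Toy rendering step for a kubeadm config fragment. Values are the ones
	// visible in the log above; the template text itself is illustrative.
	const tmpl = `apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: {{.AdvertiseAddress}}
	  bindPort: {{.APIServerPort}}
	nodeRegistration:
	  criSocket: {{.CRISocket}}
	  name: "{{.NodeName}}"
	`

	func main() {
		t := template.Must(template.New("kubeadm").Parse(tmpl))
		_ = t.Execute(os.Stdout, map[string]any{
			"AdvertiseAddress": "192.168.67.2",
			"APIServerPort":    8443,
			"CRISocket":        "unix:///var/run/crio/crio.sock",
			"NodeName":         "test-preload-171645",
		})
	}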
	
	I0531 19:16:56.258146   99837 kubeadm.go:971] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --enforce-node-allocatable= --hostname-override=test-preload-171645 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.67.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.4 ClusterName:test-preload-171645 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0531 19:16:56.258240   99837 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.4
	I0531 19:16:56.269615   99837 binaries.go:44] Found k8s binaries, skipping transfer
	I0531 19:16:56.269727   99837 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0531 19:16:56.280467   99837 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (429 bytes)
	I0531 19:16:56.302859   99837 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0531 19:16:56.324482   99837 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2100 bytes)
	I0531 19:16:56.345734   99837 ssh_runner.go:195] Run: grep 192.168.67.2	control-plane.minikube.internal$ /etc/hosts
	I0531 19:16:56.350121   99837 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.67.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0531 19:16:56.363204   99837 certs.go:56] Setting up /home/jenkins/minikube-integration/16569-2389/.minikube/profiles/test-preload-171645 for IP: 192.168.67.2
	I0531 19:16:56.363235   99837 certs.go:190] acquiring lock for shared ca certs: {Name:mk0147accf8b8da231d39646bdc89fced67451cd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0531 19:16:56.363374   99837 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/16569-2389/.minikube/ca.key
	I0531 19:16:56.363423   99837 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/16569-2389/.minikube/proxy-client-ca.key
	I0531 19:16:56.363498   99837 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/16569-2389/.minikube/profiles/test-preload-171645/client.key
	I0531 19:16:56.363564   99837 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/16569-2389/.minikube/profiles/test-preload-171645/apiserver.key.c7fa3a9e
	I0531 19:16:56.363612   99837 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/16569-2389/.minikube/profiles/test-preload-171645/proxy-client.key
	I0531 19:16:56.363725   99837 certs.go:437] found cert: /home/jenkins/minikube-integration/16569-2389/.minikube/certs/home/jenkins/minikube-integration/16569-2389/.minikube/certs/7804.pem (1338 bytes)
	W0531 19:16:56.363758   99837 certs.go:433] ignoring /home/jenkins/minikube-integration/16569-2389/.minikube/certs/home/jenkins/minikube-integration/16569-2389/.minikube/certs/7804_empty.pem, impossibly tiny 0 bytes
	I0531 19:16:56.363772   99837 certs.go:437] found cert: /home/jenkins/minikube-integration/16569-2389/.minikube/certs/home/jenkins/minikube-integration/16569-2389/.minikube/certs/ca-key.pem (1675 bytes)
	I0531 19:16:56.363796   99837 certs.go:437] found cert: /home/jenkins/minikube-integration/16569-2389/.minikube/certs/home/jenkins/minikube-integration/16569-2389/.minikube/certs/ca.pem (1078 bytes)
	I0531 19:16:56.363825   99837 certs.go:437] found cert: /home/jenkins/minikube-integration/16569-2389/.minikube/certs/home/jenkins/minikube-integration/16569-2389/.minikube/certs/cert.pem (1123 bytes)
	I0531 19:16:56.363858   99837 certs.go:437] found cert: /home/jenkins/minikube-integration/16569-2389/.minikube/certs/home/jenkins/minikube-integration/16569-2389/.minikube/certs/key.pem (1679 bytes)
	I0531 19:16:56.363917   99837 certs.go:437] found cert: /home/jenkins/minikube-integration/16569-2389/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/16569-2389/.minikube/files/etc/ssl/certs/78042.pem (1708 bytes)
	I0531 19:16:56.364523   99837 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16569-2389/.minikube/profiles/test-preload-171645/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0531 19:16:56.394013   99837 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16569-2389/.minikube/profiles/test-preload-171645/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0531 19:16:56.422895   99837 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16569-2389/.minikube/profiles/test-preload-171645/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0531 19:16:56.451405   99837 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16569-2389/.minikube/profiles/test-preload-171645/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0531 19:16:56.479861   99837 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16569-2389/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0531 19:16:56.510044   99837 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16569-2389/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0531 19:16:56.538467   99837 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16569-2389/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0531 19:16:56.567703   99837 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16569-2389/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0531 19:16:56.598568   99837 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16569-2389/.minikube/files/etc/ssl/certs/78042.pem --> /usr/share/ca-certificates/78042.pem (1708 bytes)
	I0531 19:16:56.628867   99837 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16569-2389/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0531 19:16:56.657576   99837 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16569-2389/.minikube/certs/7804.pem --> /usr/share/ca-certificates/7804.pem (1338 bytes)
	I0531 19:16:56.686373   99837 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0531 19:16:56.708342   99837 ssh_runner.go:195] Run: openssl version
	I0531 19:16:56.715403   99837 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/78042.pem && ln -fs /usr/share/ca-certificates/78042.pem /etc/ssl/certs/78042.pem"
	I0531 19:16:56.727790   99837 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/78042.pem
	I0531 19:16:56.732404   99837 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 May 31 18:52 /usr/share/ca-certificates/78042.pem
	I0531 19:16:56.732469   99837 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/78042.pem
	I0531 19:16:56.740890   99837 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/78042.pem /etc/ssl/certs/3ec20f2e.0"
	I0531 19:16:56.752363   99837 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0531 19:16:56.764171   99837 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0531 19:16:56.768688   99837 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 May 31 18:45 /usr/share/ca-certificates/minikubeCA.pem
	I0531 19:16:56.768795   99837 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0531 19:16:56.777552   99837 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0531 19:16:56.788811   99837 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7804.pem && ln -fs /usr/share/ca-certificates/7804.pem /etc/ssl/certs/7804.pem"
	I0531 19:16:56.800571   99837 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7804.pem
	I0531 19:16:56.805268   99837 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 May 31 18:52 /usr/share/ca-certificates/7804.pem
	I0531 19:16:56.805357   99837 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7804.pem
	I0531 19:16:56.814444   99837 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/7804.pem /etc/ssl/certs/51391683.0"
	I0531 19:16:56.825338   99837 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0531 19:16:56.829802   99837 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0531 19:16:56.838311   99837 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0531 19:16:56.847053   99837 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0531 19:16:56.855603   99837 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0531 19:16:56.863916   99837 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0531 19:16:56.872268   99837 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
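
	Each "-checkend 86400" call above asks openssl whether a certificate expires within the next 24 hours. An in-process equivalent using Go's crypto/x509 (the helper is ours; the file path is one of the certs named in the log):

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	// expiresSoon is an in-process stand-in for
	// "openssl x509 -noout -in <cert> -checkend <seconds>".
	func expiresSoon(path string, within time.Duration) (bool, error) {
		data, err := os.ReadFile(path)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return false, fmt.Errorf("no PEM block in %s", path)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		return time.Now().Add(within).After(cert.NotAfter), nil
	}

	func main() {
		soon, err := expiresSoon(
			"/var/lib/minikube/certs/apiserver-etcd-client.crt", 24*time.Hour)
		if err != nil {
			panic(err)
		}
		fmt.Println("expires within 24h:", soon)
	}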
	I0531 19:16:56.880685   99837 kubeadm.go:404] StartCluster: {Name:test-preload-171645 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685034446-16582@sha256:aa728b22374c829d1e5b0a5d64d51d3e0ae0f2b191381d957516fdff68f357c8 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-171645 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0531 19:16:56.880791   99837 cri.go:53] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0531 19:16:56.880870   99837 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0531 19:16:56.922671   99837 cri.go:88] found id: ""
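
The empty `found id: ""` result means the runtime has no kube-system containers registered yet; the query filters on the namespace label CRI-O attaches to every pod container. The same check run by hand, with the result captured for a quick emptiness test:

    # -a includes stopped containers; --quiet prints only container IDs.
    ids=$(sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system)
    [ -z "$ids" ] && echo "no kube-system containers found"
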
	I0531 19:16:56.922762   99837 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0531 19:16:56.933033   99837 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0531 19:16:56.933054   99837 kubeadm.go:636] restartCluster start
	I0531 19:16:56.933110   99837 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0531 19:16:56.942933   99837 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0531 19:16:56.943346   99837 kubeconfig.go:135] verify returned: extract IP: "test-preload-171645" does not appear in /home/jenkins/minikube-integration/16569-2389/kubeconfig
	I0531 19:16:56.943450   99837 kubeconfig.go:146] "test-preload-171645" context is missing from /home/jenkins/minikube-integration/16569-2389/kubeconfig - will repair!
	I0531 19:16:56.943739   99837 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16569-2389/kubeconfig: {Name:mk0c7b1a200a0a97aa7bf4307790fd99336ec425 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0531 19:16:56.944348   99837 kapi.go:59] client config for test-preload-171645: &rest.Config{Host:"https://192.168.67.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/16569-2389/.minikube/profiles/test-preload-171645/client.crt", KeyFile:"/home/jenkins/minikube-integration/16569-2389/.minikube/profiles/test-preload-171645/client.key", CAFile:"/home/jenkins/minikube-integration/16569-2389/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x13dde10), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
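
The repair above writes the missing context back into the kubeconfig; a quick hand check that it took (path taken from the log):

    # The profile name should now appear as a context entry.
    kubectl config get-contexts \
      --kubeconfig /home/jenkins/minikube-integration/16569-2389/kubeconfig \
      | grep test-preload-171645
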
	I0531 19:16:56.945222   99837 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0531 19:16:56.956291   99837 api_server.go:166] Checking apiserver status ...
	I0531 19:16:56.956378   99837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 19:16:56.968162   99837 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 19:16:57.468930   99837 api_server.go:166] Checking apiserver status ...
	I0531 19:16:57.469059   99837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 19:16:57.481066   99837 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 19:16:57.968439   99837 api_server.go:166] Checking apiserver status ...
	I0531 19:16:57.968575   99837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 19:16:57.980783   99837 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 19:16:58.468395   99837 api_server.go:166] Checking apiserver status ...
	I0531 19:16:58.468592   99837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 19:16:58.480901   99837 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 19:16:58.968495   99837 api_server.go:166] Checking apiserver status ...
	I0531 19:16:58.968655   99837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 19:16:58.985349   99837 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 19:16:59.469028   99837 api_server.go:166] Checking apiserver status ...
	I0531 19:16:59.469104   99837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 19:16:59.480937   99837 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 19:16:59.968622   99837 api_server.go:166] Checking apiserver status ...
	I0531 19:16:59.968724   99837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 19:16:59.980826   99837 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 19:17:00.468437   99837 api_server.go:166] Checking apiserver status ...
	I0531 19:17:00.468553   99837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 19:17:00.480576   99837 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 19:17:00.969244   99837 api_server.go:166] Checking apiserver status ...
	I0531 19:17:00.969357   99837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 19:17:00.982313   99837 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 19:17:01.468969   99837 api_server.go:166] Checking apiserver status ...
	I0531 19:17:01.469106   99837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 19:17:01.482700   99837 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 19:17:01.968270   99837 api_server.go:166] Checking apiserver status ...
	I0531 19:17:01.968368   99837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 19:17:01.981262   99837 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 19:17:02.468926   99837 api_server.go:166] Checking apiserver status ...
	I0531 19:17:02.469042   99837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 19:17:02.482034   99837 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 19:17:02.968398   99837 api_server.go:166] Checking apiserver status ...
	I0531 19:17:02.968509   99837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 19:17:02.981037   99837 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 19:17:03.468406   99837 api_server.go:166] Checking apiserver status ...
	I0531 19:17:03.468520   99837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 19:17:03.481676   99837 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 19:17:03.968267   99837 api_server.go:166] Checking apiserver status ...
	I0531 19:17:03.968364   99837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 19:17:03.981623   99837 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 19:17:04.469280   99837 api_server.go:166] Checking apiserver status ...
	I0531 19:17:04.469383   99837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 19:17:04.482365   99837 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 19:17:04.969026   99837 api_server.go:166] Checking apiserver status ...
	I0531 19:17:04.969140   99837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 19:17:04.982237   99837 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 19:17:05.468900   99837 api_server.go:166] Checking apiserver status ...
	I0531 19:17:05.468990   99837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 19:17:05.481369   99837 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 19:17:05.969059   99837 api_server.go:166] Checking apiserver status ...
	I0531 19:17:05.969162   99837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 19:17:05.981757   99837 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 19:17:06.468361   99837 api_server.go:166] Checking apiserver status ...
	I0531 19:17:06.468481   99837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 19:17:06.481051   99837 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
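
The twenty near-identical probes above are one polling loop: every ~500ms minikube asks for a kube-apiserver PID, giving up when the caller's deadline expires. Condensed into shell (the iteration count is illustrative; the real loop is bounded by a Go context, not a counter):

    # Poll for an apiserver process; in this log no PID ever appears,
    # so the loop times out and a cluster reconfigure is triggered.
    for _ in $(seq 1 20); do
        sudo pgrep -xnf 'kube-apiserver.*minikube.*' && break
        sleep 0.5
    done
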
	I0531 19:17:06.957053   99837 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0531 19:17:06.957084   99837 kubeadm.go:1123] stopping kube-system containers ...
	I0531 19:17:06.957097   99837 cri.go:53] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0531 19:17:06.957177   99837 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0531 19:17:07.004773   99837 cri.go:88] found id: ""
	I0531 19:17:07.004856   99837 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0531 19:17:07.019976   99837 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0531 19:17:07.031900   99837 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5643 May 31 19:15 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5656 May 31 19:15 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2015 May 31 19:16 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5604 May 31 19:15 /etc/kubernetes/scheduler.conf
	
	I0531 19:17:07.031978   99837 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0531 19:17:07.043933   99837 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0531 19:17:07.055692   99837 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0531 19:17:07.067329   99837 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0531 19:17:07.067446   99837 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0531 19:17:07.078882   99837 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0531 19:17:07.091293   99837 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0531 19:17:07.091363   99837 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
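
The greps above decide, file by file, whether each component kubeconfig still points at the canonical control-plane endpoint; a failing grep (exit 1) gets the file deleted so kubeadm regenerates it. The same logic in shell:

    # Remove any component kubeconfig missing the expected endpoint
    # (here admin.conf and kubelet.conf pass; the other two are removed).
    for f in admin kubelet controller-manager scheduler; do
        sudo grep -q https://control-plane.minikube.internal:8443 "/etc/kubernetes/${f}.conf" \
            || sudo rm -f "/etc/kubernetes/${f}.conf"
    done
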
	I0531 19:17:07.102992   99837 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0531 19:17:07.114692   99837 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0531 19:17:07.114726   99837 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0531 19:17:07.180827   99837 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0531 19:17:09.092203   99837 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.911340118s)
	I0531 19:17:09.092229   99837 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0531 19:17:09.336307   99837 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0531 19:17:09.414369   99837 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
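
Rather than a full `kubeadm init`, the restart path replays individual init phases against the refreshed config, with PATH pinned to the bundled v1.24.4 binaries. The sequence, as run above (KUBE_PATH is just shorthand for this sketch):

    KUBE_PATH="/var/lib/minikube/binaries/v1.24.4:$PATH"
    sudo env PATH="$KUBE_PATH" kubeadm init phase certs all          --config /var/tmp/minikube/kubeadm.yaml
    sudo env PATH="$KUBE_PATH" kubeadm init phase kubeconfig all     --config /var/tmp/minikube/kubeadm.yaml
    sudo env PATH="$KUBE_PATH" kubeadm init phase kubelet-start      --config /var/tmp/minikube/kubeadm.yaml
    sudo env PATH="$KUBE_PATH" kubeadm init phase control-plane all  --config /var/tmp/minikube/kubeadm.yaml
    sudo env PATH="$KUBE_PATH" kubeadm init phase etcd local         --config /var/tmp/minikube/kubeadm.yaml
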
	I0531 19:17:09.501757   99837 api_server.go:52] waiting for apiserver process to appear ...
	I0531 19:17:09.501836   99837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 19:17:10.021813   99837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 19:17:10.521993   99837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 19:17:10.547267   99837 api_server.go:72] duration metric: took 1.04550939s to wait for apiserver process to appear ...
	I0531 19:17:10.547291   99837 api_server.go:88] waiting for apiserver healthz status ...
	I0531 19:17:10.547307   99837 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0531 19:17:15.548255   99837 api_server.go:269] stopped: https://192.168.67.2:8443/healthz: Get "https://192.168.67.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0531 19:17:16.048710   99837 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0531 19:17:16.057673   99837 api_server.go:279] https://192.168.67.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0531 19:17:16.057704   99837 api_server.go:103] status: https://192.168.67.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0531 19:17:16.548840   99837 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0531 19:17:16.559486   99837 api_server.go:279] https://192.168.67.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0531 19:17:16.559516   99837 api_server.go:103] status: https://192.168.67.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0531 19:17:17.049242   99837 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0531 19:17:17.058155   99837 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I0531 19:17:17.072987   99837 api_server.go:141] control plane version: v1.24.4
	I0531 19:17:17.073025   99837 api_server.go:131] duration metric: took 6.52572302s to wait for apiserver health ...
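
A 500 from /healthz enumerates every registered check and flags failures with [-]; readiness is simply this endpoint flipping to 200/ok once the rbac and priority-class bootstrap hooks finish, as seen above. The endpoint can be probed directly with the client certs from the kubeconfig (paths from the log; ?verbose forces the per-check listing even on success):

    curl -s --cacert /home/jenkins/minikube-integration/16569-2389/.minikube/ca.crt \
      --cert /home/jenkins/minikube-integration/16569-2389/.minikube/profiles/test-preload-171645/client.crt \
      --key /home/jenkins/minikube-integration/16569-2389/.minikube/profiles/test-preload-171645/client.key \
      'https://192.168.67.2:8443/healthz?verbose'
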
	I0531 19:17:17.073035   99837 cni.go:84] Creating CNI manager for ""
	I0531 19:17:17.073041   99837 cni.go:142] "docker" driver + "crio" runtime found, recommending kindnet
	I0531 19:17:17.075020   99837 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0531 19:17:17.076785   99837 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0531 19:17:17.081705   99837 cni.go:181] applying CNI manifest using /var/lib/minikube/binaries/v1.24.4/kubectl ...
	I0531 19:17:17.081727   99837 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0531 19:17:17.104779   99837 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0531 19:17:18.237095   99837 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.24.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.132277897s)
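
The kindnet CNI manifest is rendered in memory, copied to the node, and applied with the version-pinned kubectl against the in-cluster kubeconfig, as above. A hand check of the result (a sketch; the grep pattern assumes kindnet pods keep their default naming, which the pod list below confirms):

    sudo /var/lib/minikube/binaries/v1.24.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
      get pods -n kube-system -o wide | grep kindnet
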
	I0531 19:17:18.237139   99837 system_pods.go:43] waiting for kube-system pods to appear ...
	I0531 19:17:18.246272   99837 system_pods.go:59] 8 kube-system pods found
	I0531 19:17:18.246314   99837 system_pods.go:61] "coredns-6d4b75cb6d-9qms2" [4e791c9c-e1da-441b-970f-0325882ec2f9] Running
	I0531 19:17:18.246364   99837 system_pods.go:61] "etcd-test-preload-171645" [94423385-8681-47c0-ba92-e047cde61a1d] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0531 19:17:18.246384   99837 system_pods.go:61] "kindnet-lfm8z" [3daf2b79-9d3a-4492-b881-64a63c4936e7] Running
	I0531 19:17:18.246394   99837 system_pods.go:61] "kube-apiserver-test-preload-171645" [2b399d40-786b-455a-ba8a-d94ba470f6fe] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0531 19:17:18.246407   99837 system_pods.go:61] "kube-controller-manager-test-preload-171645" [3e35ca1c-d94b-4f04-af0d-14e0aa595fc1] Running
	I0531 19:17:18.246418   99837 system_pods.go:61] "kube-proxy-x885r" [910feb26-d4d3-458f-8fe5-c7cf99b90dc0] Running
	I0531 19:17:18.246448   99837 system_pods.go:61] "kube-scheduler-test-preload-171645" [21180c7d-9e1b-4371-ba27-e7209d0770c9] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0531 19:17:18.246461   99837 system_pods.go:61] "storage-provisioner" [8099d68e-42e6-48d8-b543-2674a3a0d5e1] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0531 19:17:18.246468   99837 system_pods.go:74] duration metric: took 9.322154ms to wait for pod list to return data ...
	I0531 19:17:18.246485   99837 node_conditions.go:102] verifying NodePressure condition ...
	I0531 19:17:18.249971   99837 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0531 19:17:18.250045   99837 node_conditions.go:123] node cpu capacity is 2
	I0531 19:17:18.250063   99837 node_conditions.go:105] duration metric: took 3.572852ms to run NodePressure ...
	I0531 19:17:18.250081   99837 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0531 19:17:18.437967   99837 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0531 19:17:18.447107   99837 kubeadm.go:787] kubelet initialised
	I0531 19:17:18.447138   99837 kubeadm.go:788] duration metric: took 9.142135ms waiting for restarted kubelet to initialise ...
	I0531 19:17:18.447161   99837 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0531 19:17:18.453392   99837 pod_ready.go:78] waiting up to 4m0s for pod "coredns-6d4b75cb6d-9qms2" in "kube-system" namespace to be "Ready" ...
	I0531 19:17:18.459063   99837 pod_ready.go:92] pod "coredns-6d4b75cb6d-9qms2" in "kube-system" namespace has status "Ready":"True"
	I0531 19:17:18.459091   99837 pod_ready.go:81] duration metric: took 5.668802ms waiting for pod "coredns-6d4b75cb6d-9qms2" in "kube-system" namespace to be "Ready" ...
	I0531 19:17:18.459108   99837 pod_ready.go:78] waiting up to 4m0s for pod "etcd-test-preload-171645" in "kube-system" namespace to be "Ready" ...
	I0531 19:17:20.473987   99837 pod_ready.go:102] pod "etcd-test-preload-171645" in "kube-system" namespace has status "Ready":"False"
	I0531 19:17:21.973256   99837 pod_ready.go:92] pod "etcd-test-preload-171645" in "kube-system" namespace has status "Ready":"True"
	I0531 19:17:21.973284   99837 pod_ready.go:81] duration metric: took 3.514168071s waiting for pod "etcd-test-preload-171645" in "kube-system" namespace to be "Ready" ...
	I0531 19:17:21.973300   99837 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-test-preload-171645" in "kube-system" namespace to be "Ready" ...
	I0531 19:17:23.985243   99837 pod_ready.go:102] pod "kube-apiserver-test-preload-171645" in "kube-system" namespace has status "Ready":"False"
	I0531 19:17:26.486313   99837 pod_ready.go:102] pod "kube-apiserver-test-preload-171645" in "kube-system" namespace has status "Ready":"False"
	I0531 19:17:27.485391   99837 pod_ready.go:92] pod "kube-apiserver-test-preload-171645" in "kube-system" namespace has status "Ready":"True"
	I0531 19:17:27.485417   99837 pod_ready.go:81] duration metric: took 5.512109649s waiting for pod "kube-apiserver-test-preload-171645" in "kube-system" namespace to be "Ready" ...
	I0531 19:17:27.485428   99837 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-test-preload-171645" in "kube-system" namespace to be "Ready" ...
	I0531 19:17:29.500471   99837 pod_ready.go:102] pod "kube-controller-manager-test-preload-171645" in "kube-system" namespace has status "Ready":"False"
	I0531 19:17:29.999569   99837 pod_ready.go:92] pod "kube-controller-manager-test-preload-171645" in "kube-system" namespace has status "Ready":"True"
	I0531 19:17:29.999591   99837 pod_ready.go:81] duration metric: took 2.514155331s waiting for pod "kube-controller-manager-test-preload-171645" in "kube-system" namespace to be "Ready" ...
	I0531 19:17:29.999602   99837 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-x885r" in "kube-system" namespace to be "Ready" ...
	I0531 19:17:30.009715   99837 pod_ready.go:92] pod "kube-proxy-x885r" in "kube-system" namespace has status "Ready":"True"
	I0531 19:17:30.009743   99837 pod_ready.go:81] duration metric: took 10.133511ms waiting for pod "kube-proxy-x885r" in "kube-system" namespace to be "Ready" ...
	I0531 19:17:30.009756   99837 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-test-preload-171645" in "kube-system" namespace to be "Ready" ...
	I0531 19:17:30.016933   99837 pod_ready.go:92] pod "kube-scheduler-test-preload-171645" in "kube-system" namespace has status "Ready":"True"
	I0531 19:17:30.017013   99837 pod_ready.go:81] duration metric: took 7.247415ms waiting for pod "kube-scheduler-test-preload-171645" in "kube-system" namespace to be "Ready" ...
	I0531 19:17:30.017042   99837 pod_ready.go:38] duration metric: took 11.569870375s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0531 19:17:30.017102   99837 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0531 19:17:30.029798   99837 ops.go:34] apiserver oom_adj: -16
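
An oom_adj of -16 tells the kernel's OOM killer to strongly prefer sacrificing other processes before the apiserver. The check reads it straight out of procfs, exactly as above:

    # Negative values lower the process's OOM-kill priority.
    cat /proc/$(pgrep kube-apiserver)/oom_adj
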
	I0531 19:17:30.029869   99837 kubeadm.go:640] restartCluster took 33.096806616s
	I0531 19:17:30.029898   99837 kubeadm.go:406] StartCluster complete in 33.149224926s
	I0531 19:17:30.029957   99837 settings.go:142] acquiring lock: {Name:mk7112454687e7bda5617b0aa762b583179f0f78 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0531 19:17:30.030078   99837 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/16569-2389/kubeconfig
	I0531 19:17:30.034544   99837 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16569-2389/kubeconfig: {Name:mk0c7b1a200a0a97aa7bf4307790fd99336ec425 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0531 19:17:30.034886   99837 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0531 19:17:30.035195   99837 config.go:182] Loaded profile config "test-preload-171645": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.24.4
	I0531 19:17:30.035317   99837 addons.go:496] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
	I0531 19:17:30.035404   99837 addons.go:66] Setting storage-provisioner=true in profile "test-preload-171645"
	I0531 19:17:30.035420   99837 addons.go:228] Setting addon storage-provisioner=true in "test-preload-171645"
	W0531 19:17:30.035439   99837 addons.go:237] addon storage-provisioner should already be in state true
	I0531 19:17:30.035483   99837 host.go:66] Checking if "test-preload-171645" exists ...
	I0531 19:17:30.035554   99837 kapi.go:59] client config for test-preload-171645: &rest.Config{Host:"https://192.168.67.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/16569-2389/.minikube/profiles/test-preload-171645/client.crt", KeyFile:"/home/jenkins/minikube-integration/16569-2389/.minikube/profiles/test-preload-171645/client.key", CAFile:"/home/jenkins/minikube-integration/16569-2389/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x13dde10), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0531 19:17:30.035965   99837 cli_runner.go:164] Run: docker container inspect test-preload-171645 --format={{.State.Status}}
	I0531 19:17:30.036067   99837 addons.go:66] Setting default-storageclass=true in profile "test-preload-171645"
	I0531 19:17:30.036088   99837 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "test-preload-171645"
	I0531 19:17:30.036420   99837 cli_runner.go:164] Run: docker container inspect test-preload-171645 --format={{.State.Status}}
	I0531 19:17:30.045041   99837 kapi.go:248] "coredns" deployment in "kube-system" namespace and "test-preload-171645" context rescaled to 1 replicas
	I0531 19:17:30.045086   99837 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0531 19:17:30.048212   99837 out.go:177] * Verifying Kubernetes components...
	I0531 19:17:30.050352   99837 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0531 19:17:30.085708   99837 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0531 19:17:30.087722   99837 addons.go:420] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0531 19:17:30.087744   99837 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0531 19:17:30.087819   99837 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-171645
	I0531 19:17:30.101195   99837 kapi.go:59] client config for test-preload-171645: &rest.Config{Host:"https://192.168.67.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/16569-2389/.minikube/profiles/test-preload-171645/client.crt", KeyFile:"/home/jenkins/minikube-integration/16569-2389/.minikube/profiles/test-preload-171645/client.key", CAFile:"/home/jenkins/minikube-integration/16569-2389/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x13dde10), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0531 19:17:30.115104   99837 addons.go:228] Setting addon default-storageclass=true in "test-preload-171645"
	W0531 19:17:30.115131   99837 addons.go:237] addon default-storageclass should already be in state true
	I0531 19:17:30.115157   99837 host.go:66] Checking if "test-preload-171645" exists ...
	I0531 19:17:30.115645   99837 cli_runner.go:164] Run: docker container inspect test-preload-171645 --format={{.State.Status}}
	I0531 19:17:30.135911   99837 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32904 SSHKeyPath:/home/jenkins/minikube-integration/16569-2389/.minikube/machines/test-preload-171645/id_rsa Username:docker}
	I0531 19:17:30.152561   99837 addons.go:420] installing /etc/kubernetes/addons/storageclass.yaml
	I0531 19:17:30.152588   99837 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0531 19:17:30.152652   99837 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-171645
	I0531 19:17:30.184409   99837 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32904 SSHKeyPath:/home/jenkins/minikube-integration/16569-2389/.minikube/machines/test-preload-171645/id_rsa Username:docker}
	I0531 19:17:30.220361   99837 start.go:889] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0531 19:17:30.220433   99837 node_ready.go:35] waiting up to 6m0s for node "test-preload-171645" to be "Ready" ...
	I0531 19:17:30.223545   99837 node_ready.go:49] node "test-preload-171645" has status "Ready":"True"
	I0531 19:17:30.223573   99837 node_ready.go:38] duration metric: took 3.120417ms waiting for node "test-preload-171645" to be "Ready" ...
	I0531 19:17:30.223585   99837 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0531 19:17:30.230281   99837 pod_ready.go:78] waiting up to 6m0s for pod "coredns-6d4b75cb6d-9qms2" in "kube-system" namespace to be "Ready" ...
	I0531 19:17:30.285650   99837 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0531 19:17:30.335746   99837 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0531 19:17:30.564075   99837 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0531 19:17:30.566060   99837 addons.go:499] enable addons completed in 530.737278ms: enabled=[storage-provisioner default-storageclass]
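
Enabling an addon here amounts to copying its manifest into /etc/kubernetes/addons/ and applying it with the pinned kubectl, as the two Run lines above show; condensed:

    # Apply each addon manifest against the in-cluster kubeconfig.
    for m in storage-provisioner storageclass; do
        sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
            /var/lib/minikube/binaries/v1.24.4/kubectl apply -f "/etc/kubernetes/addons/${m}.yaml"
    done
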
	I0531 19:17:32.243863   99837 pod_ready.go:102] pod "coredns-6d4b75cb6d-9qms2" in "kube-system" namespace has status "Ready":"False"
	I0531 19:17:34.743503   99837 pod_ready.go:102] pod "coredns-6d4b75cb6d-9qms2" in "kube-system" namespace has status "Ready":"False"
	I0531 19:17:36.743557   99837 pod_ready.go:102] pod "coredns-6d4b75cb6d-9qms2" in "kube-system" namespace has status "Ready":"False"
	I0531 19:17:39.243672   99837 pod_ready.go:102] pod "coredns-6d4b75cb6d-9qms2" in "kube-system" namespace has status "Ready":"False"
	I0531 19:17:41.243851   99837 pod_ready.go:102] pod "coredns-6d4b75cb6d-9qms2" in "kube-system" namespace has status "Ready":"False"
	I0531 19:17:43.743101   99837 pod_ready.go:102] pod "coredns-6d4b75cb6d-9qms2" in "kube-system" namespace has status "Ready":"False"
	I0531 19:17:45.743468   99837 pod_ready.go:102] pod "coredns-6d4b75cb6d-9qms2" in "kube-system" namespace has status "Ready":"False"
	I0531 19:17:47.743618   99837 pod_ready.go:102] pod "coredns-6d4b75cb6d-9qms2" in "kube-system" namespace has status "Ready":"False"
	I0531 19:17:49.243764   99837 pod_ready.go:92] pod "coredns-6d4b75cb6d-9qms2" in "kube-system" namespace has status "Ready":"True"
	I0531 19:17:49.243791   99837 pod_ready.go:81] duration metric: took 19.013480415s waiting for pod "coredns-6d4b75cb6d-9qms2" in "kube-system" namespace to be "Ready" ...
	I0531 19:17:49.243803   99837 pod_ready.go:78] waiting up to 6m0s for pod "etcd-test-preload-171645" in "kube-system" namespace to be "Ready" ...
	I0531 19:17:49.249127   99837 pod_ready.go:92] pod "etcd-test-preload-171645" in "kube-system" namespace has status "Ready":"True"
	I0531 19:17:49.249151   99837 pod_ready.go:81] duration metric: took 5.341469ms waiting for pod "etcd-test-preload-171645" in "kube-system" namespace to be "Ready" ...
	I0531 19:17:49.249166   99837 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-test-preload-171645" in "kube-system" namespace to be "Ready" ...
	I0531 19:17:49.254643   99837 pod_ready.go:92] pod "kube-apiserver-test-preload-171645" in "kube-system" namespace has status "Ready":"True"
	I0531 19:17:49.254665   99837 pod_ready.go:81] duration metric: took 5.490391ms waiting for pod "kube-apiserver-test-preload-171645" in "kube-system" namespace to be "Ready" ...
	I0531 19:17:49.254680   99837 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-test-preload-171645" in "kube-system" namespace to be "Ready" ...
	I0531 19:17:49.259983   99837 pod_ready.go:92] pod "kube-controller-manager-test-preload-171645" in "kube-system" namespace has status "Ready":"True"
	I0531 19:17:49.260009   99837 pod_ready.go:81] duration metric: took 5.318791ms waiting for pod "kube-controller-manager-test-preload-171645" in "kube-system" namespace to be "Ready" ...
	I0531 19:17:49.260022   99837 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-x885r" in "kube-system" namespace to be "Ready" ...
	I0531 19:17:49.264993   99837 pod_ready.go:92] pod "kube-proxy-x885r" in "kube-system" namespace has status "Ready":"True"
	I0531 19:17:49.265015   99837 pod_ready.go:81] duration metric: took 4.985657ms waiting for pod "kube-proxy-x885r" in "kube-system" namespace to be "Ready" ...
	I0531 19:17:49.265031   99837 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-test-preload-171645" in "kube-system" namespace to be "Ready" ...
	I0531 19:17:49.641603   99837 pod_ready.go:92] pod "kube-scheduler-test-preload-171645" in "kube-system" namespace has status "Ready":"True"
	I0531 19:17:49.641629   99837 pod_ready.go:81] duration metric: took 376.589943ms waiting for pod "kube-scheduler-test-preload-171645" in "kube-system" namespace to be "Ready" ...
	I0531 19:17:49.641642   99837 pod_ready.go:38] duration metric: took 19.418047758s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0531 19:17:49.641681   99837 api_server.go:52] waiting for apiserver process to appear ...
	I0531 19:17:49.641763   99837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 19:17:49.655770   99837 api_server.go:72] duration metric: took 19.610651454s to wait for apiserver process to appear ...
	I0531 19:17:49.655796   99837 api_server.go:88] waiting for apiserver healthz status ...
	I0531 19:17:49.655812   99837 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0531 19:17:49.664561   99837 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I0531 19:17:49.665498   99837 api_server.go:141] control plane version: v1.24.4
	I0531 19:17:49.665534   99837 api_server.go:131] duration metric: took 9.73084ms to wait for apiserver health ...
	I0531 19:17:49.665544   99837 system_pods.go:43] waiting for kube-system pods to appear ...
	I0531 19:17:49.845192   99837 system_pods.go:59] 8 kube-system pods found
	I0531 19:17:49.845223   99837 system_pods.go:61] "coredns-6d4b75cb6d-9qms2" [4e791c9c-e1da-441b-970f-0325882ec2f9] Running
	I0531 19:17:49.845229   99837 system_pods.go:61] "etcd-test-preload-171645" [94423385-8681-47c0-ba92-e047cde61a1d] Running
	I0531 19:17:49.845234   99837 system_pods.go:61] "kindnet-lfm8z" [3daf2b79-9d3a-4492-b881-64a63c4936e7] Running
	I0531 19:17:49.845239   99837 system_pods.go:61] "kube-apiserver-test-preload-171645" [2b399d40-786b-455a-ba8a-d94ba470f6fe] Running
	I0531 19:17:49.845330   99837 system_pods.go:61] "kube-controller-manager-test-preload-171645" [3e35ca1c-d94b-4f04-af0d-14e0aa595fc1] Running
	I0531 19:17:49.845364   99837 system_pods.go:61] "kube-proxy-x885r" [910feb26-d4d3-458f-8fe5-c7cf99b90dc0] Running
	I0531 19:17:49.845376   99837 system_pods.go:61] "kube-scheduler-test-preload-171645" [21180c7d-9e1b-4371-ba27-e7209d0770c9] Running
	I0531 19:17:49.845382   99837 system_pods.go:61] "storage-provisioner" [8099d68e-42e6-48d8-b543-2674a3a0d5e1] Running
	I0531 19:17:49.845388   99837 system_pods.go:74] duration metric: took 179.824663ms to wait for pod list to return data ...
	I0531 19:17:49.845396   99837 default_sa.go:34] waiting for default service account to be created ...
	I0531 19:17:50.041727   99837 default_sa.go:45] found service account: "default"
	I0531 19:17:50.041753   99837 default_sa.go:55] duration metric: took 196.348833ms for default service account to be created ...
	I0531 19:17:50.041764   99837 system_pods.go:116] waiting for k8s-apps to be running ...
	I0531 19:17:50.245381   99837 system_pods.go:86] 8 kube-system pods found
	I0531 19:17:50.245414   99837 system_pods.go:89] "coredns-6d4b75cb6d-9qms2" [4e791c9c-e1da-441b-970f-0325882ec2f9] Running
	I0531 19:17:50.245421   99837 system_pods.go:89] "etcd-test-preload-171645" [94423385-8681-47c0-ba92-e047cde61a1d] Running
	I0531 19:17:50.245426   99837 system_pods.go:89] "kindnet-lfm8z" [3daf2b79-9d3a-4492-b881-64a63c4936e7] Running
	I0531 19:17:50.245431   99837 system_pods.go:89] "kube-apiserver-test-preload-171645" [2b399d40-786b-455a-ba8a-d94ba470f6fe] Running
	I0531 19:17:50.245441   99837 system_pods.go:89] "kube-controller-manager-test-preload-171645" [3e35ca1c-d94b-4f04-af0d-14e0aa595fc1] Running
	I0531 19:17:50.245446   99837 system_pods.go:89] "kube-proxy-x885r" [910feb26-d4d3-458f-8fe5-c7cf99b90dc0] Running
	I0531 19:17:50.245451   99837 system_pods.go:89] "kube-scheduler-test-preload-171645" [21180c7d-9e1b-4371-ba27-e7209d0770c9] Running
	I0531 19:17:50.245462   99837 system_pods.go:89] "storage-provisioner" [8099d68e-42e6-48d8-b543-2674a3a0d5e1] Running
	I0531 19:17:50.245470   99837 system_pods.go:126] duration metric: took 203.701222ms to wait for k8s-apps to be running ...
	I0531 19:17:50.245484   99837 system_svc.go:44] waiting for kubelet service to be running ....
	I0531 19:17:50.245548   99837 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0531 19:17:50.260953   99837 system_svc.go:56] duration metric: took 15.460039ms WaitForService to wait for kubelet.
	I0531 19:17:50.260976   99837 kubeadm.go:581] duration metric: took 20.215862895s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0531 19:17:50.260995   99837 node_conditions.go:102] verifying NodePressure condition ...
	I0531 19:17:50.441546   99837 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0531 19:17:50.441579   99837 node_conditions.go:123] node cpu capacity is 2
	I0531 19:17:50.441590   99837 node_conditions.go:105] duration metric: took 180.590375ms to run NodePressure ...
	I0531 19:17:50.441599   99837 start.go:228] waiting for startup goroutines ...
	I0531 19:17:50.441606   99837 start.go:233] waiting for cluster config update ...
	I0531 19:17:50.441615   99837 start.go:242] writing updated cluster config ...
	I0531 19:17:50.441905   99837 ssh_runner.go:195] Run: rm -f paused
	I0531 19:17:50.498232   99837 start.go:573] kubectl: 1.27.2, cluster: 1.24.4 (minor skew: 3)
	I0531 19:17:50.500347   99837 out.go:177] 
	W0531 19:17:50.502051   99837 out.go:239] ! /usr/local/bin/kubectl is version 1.27.2, which may have incompatibilities with Kubernetes 1.24.4.
	I0531 19:17:50.503784   99837 out.go:177]   - Want kubectl v1.24.4? Try 'minikube kubectl -- get pods -A'
	I0531 19:17:50.505398   99837 out.go:177] * Done! kubectl is now configured to use "test-preload-171645" cluster and "default" namespace by default
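
The closing warning reflects kubectl's support policy of one minor version of skew against the server: a 1.27 client talking to a 1.24 cluster is three minors apart. The suggested remedy runs minikube's bundled, version-matched kubectl:

    # Everything after `--` is passed through to the matching kubectl (v1.24.4).
    minikube kubectl -- get pods -A
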
	
	* 
	* ==> CRI-O <==
	* May 31 19:17:17 test-preload-171645 crio[600]: time="2023-05-31 19:17:17.856227282Z" level=info msg="Started container" PID=1378 containerID=0a9be86fb85fae4a162b83590df84ba72647fff053cf47975b2694e67f80d878 description=kube-system/kube-proxy-x885r/kube-proxy id=6b6961ab-7780-45cf-ab35-ad518a78c699 name=/runtime.v1.RuntimeService/StartContainer sandboxID=5b1f792d27a8e8beec6bb1eec628811b71726d7bbef27e5f1dcc7f4e453043c7
	May 31 19:17:47 test-preload-171645 crio[600]: time="2023-05-31 19:17:47.677540187Z" level=info msg="CNI monitoring event \"/etc/cni/net.d/10-kindnet.conflist.temp\": CREATE"
	May 31 19:17:47 test-preload-171645 crio[600]: time="2023-05-31 19:17:47.682037101Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	May 31 19:17:47 test-preload-171645 crio[600]: time="2023-05-31 19:17:47.682079226Z" level=info msg="Updated default CNI network name to kindnet"
	May 31 19:17:47 test-preload-171645 crio[600]: time="2023-05-31 19:17:47.682096620Z" level=info msg="CNI monitoring event \"/etc/cni/net.d/10-kindnet.conflist.temp\": WRITE"
	May 31 19:17:47 test-preload-171645 crio[600]: time="2023-05-31 19:17:47.685960022Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	May 31 19:17:47 test-preload-171645 crio[600]: time="2023-05-31 19:17:47.685996600Z" level=info msg="Updated default CNI network name to kindnet"
	May 31 19:17:47 test-preload-171645 crio[600]: time="2023-05-31 19:17:47.686013626Z" level=info msg="CNI monitoring event \"/etc/cni/net.d/10-kindnet.conflist.temp\": RENAME"
	May 31 19:17:47 test-preload-171645 crio[600]: time="2023-05-31 19:17:47.690949428Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	May 31 19:17:47 test-preload-171645 crio[600]: time="2023-05-31 19:17:47.690985432Z" level=info msg="Updated default CNI network name to kindnet"
	May 31 19:17:47 test-preload-171645 crio[600]: time="2023-05-31 19:17:47.691004821Z" level=info msg="CNI monitoring event \"/etc/cni/net.d/10-kindnet.conflist\": CREATE"
	May 31 19:17:47 test-preload-171645 crio[600]: time="2023-05-31 19:17:47.694577263Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	May 31 19:17:47 test-preload-171645 crio[600]: time="2023-05-31 19:17:47.694611699Z" level=info msg="Updated default CNI network name to kindnet"
	May 31 19:17:47 test-preload-171645 conmon[1265]: conmon 9e7112c6491c256b1bf4 <ninfo>: container 1284 exited with status 1
	May 31 19:17:48 test-preload-171645 crio[600]: time="2023-05-31 19:17:48.671138919Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=a3efde06-73d0-4a18-abce-ffa7b838972d name=/runtime.v1.ImageService/ImageStatus
	May 31 19:17:48 test-preload-171645 crio[600]: time="2023-05-31 19:17:48.671357420Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51,RepoTags:[gcr.io/k8s-minikube/storage-provisioner:v5],RepoDigests:[gcr.io/k8s-minikube/storage-provisioner@sha256:373d9ff3be95eeecb8d14e5f1ad528b612dbdd990a793b51c5842b450bcce938],Size_:29035622,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=a3efde06-73d0-4a18-abce-ffa7b838972d name=/runtime.v1.ImageService/ImageStatus
	May 31 19:17:48 test-preload-171645 crio[600]: time="2023-05-31 19:17:48.672311989Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=0096f348-5aec-4c2f-beb9-891fa34c28cb name=/runtime.v1.ImageService/ImageStatus
	May 31 19:17:48 test-preload-171645 crio[600]: time="2023-05-31 19:17:48.672532147Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51,RepoTags:[gcr.io/k8s-minikube/storage-provisioner:v5],RepoDigests:[gcr.io/k8s-minikube/storage-provisioner@sha256:373d9ff3be95eeecb8d14e5f1ad528b612dbdd990a793b51c5842b450bcce938],Size_:29035622,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=0096f348-5aec-4c2f-beb9-891fa34c28cb name=/runtime.v1.ImageService/ImageStatus
	May 31 19:17:48 test-preload-171645 crio[600]: time="2023-05-31 19:17:48.673209885Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=8fb62d66-d859-4e19-b728-61dfc8f9ffe5 name=/runtime.v1.RuntimeService/CreateContainer
	May 31 19:17:48 test-preload-171645 crio[600]: time="2023-05-31 19:17:48.673307205Z" level=warning msg="Allowed annotations are specified for workload []"
	May 31 19:17:48 test-preload-171645 crio[600]: time="2023-05-31 19:17:48.687239790Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/7093fa476000baabdfcb58d95c90430760208ff4d0e18ebf43998566ff512fad/merged/etc/passwd: no such file or directory"
	May 31 19:17:48 test-preload-171645 crio[600]: time="2023-05-31 19:17:48.687281193Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/7093fa476000baabdfcb58d95c90430760208ff4d0e18ebf43998566ff512fad/merged/etc/group: no such file or directory"
	May 31 19:17:48 test-preload-171645 crio[600]: time="2023-05-31 19:17:48.759173490Z" level=info msg="Created container 4fcec21e533f15284f9bb46bafbfeb2742e709101c3c073eae657101b7a91fcb: kube-system/storage-provisioner/storage-provisioner" id=8fb62d66-d859-4e19-b728-61dfc8f9ffe5 name=/runtime.v1.RuntimeService/CreateContainer
	May 31 19:17:48 test-preload-171645 crio[600]: time="2023-05-31 19:17:48.759710913Z" level=info msg="Starting container: 4fcec21e533f15284f9bb46bafbfeb2742e709101c3c073eae657101b7a91fcb" id=a90fe15c-57e3-4b8f-953f-805f438bc99b name=/runtime.v1.RuntimeService/StartContainer
	May 31 19:17:48 test-preload-171645 crio[600]: time="2023-05-31 19:17:48.772914585Z" level=info msg="Started container" PID=1636 containerID=4fcec21e533f15284f9bb46bafbfeb2742e709101c3c073eae657101b7a91fcb description=kube-system/storage-provisioner/storage-provisioner id=a90fe15c-57e3-4b8f-953f-805f438bc99b name=/runtime.v1.RuntimeService/StartContainer sandboxID=0532a6ffe2d1035a07704289137d23b1dfd700812e605187ddde4b73a590bd7e
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	4fcec21e533f1       66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51   3 seconds ago       Running             storage-provisioner       2                   0532a6ffe2d10       storage-provisioner
	0a9be86fb85fa       bd8cc6d58247078a865774b7f516f8afc3ac8cd080fd49650ca30ef2fbc6ebd1   34 seconds ago      Running             kube-proxy                1                   5b1f792d27a8e       kube-proxy-x885r
	2f4a80a42761e       edaa71f2aee883484133da046954ad70fd6bf1fa42e5aec3f7dae199c626299c   34 seconds ago      Running             coredns                   1                   2c15f7d307ff9       coredns-6d4b75cb6d-9qms2
	9e7112c6491c2       66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51   34 seconds ago      Exited              storage-provisioner       1                   0532a6ffe2d10       storage-provisioner
	af4aa273ba317       b18bf71b941bae2e12db1c07e567ad14e4febbc778310a0fc64487f1ac877d79   34 seconds ago      Running             kindnet-cni               1                   ce95fc03daf60       kindnet-lfm8z
	7ade59fcee4ee       a9a710bb96df080e6b9c720eb85dc5b832ff84abf77263548d74fedec6466a5a   41 seconds ago      Running             etcd                      1                   77ddd7a2e2c30       etcd-test-preload-171645
	5ba6b932a18ad       3767741e7fba72f328a8500a18ef34481343eb78697e31ae5bf3e390a28317ae   41 seconds ago      Running             kube-apiserver            1                   567e518a98d44       kube-apiserver-test-preload-171645
	a6bd1edf7d378       81a4a8a4ac639bdd7e118359417a80cab1a0d0e4737eb735714cf7f8b15dc0c7   41 seconds ago      Running             kube-controller-manager   1                   d92eefea4055e       kube-controller-manager-test-preload-171645
	852bb3872c536       5753e4610b3ec0ac100c3535b8d8a7507b3d031148e168c2c3c4b0f389976074   41 seconds ago      Running             kube-scheduler            1                   8d31170489fc3       kube-scheduler-test-preload-171645
	
	* 
	* ==> coredns [2f4a80a42761e2b720aac2e314286eff5f83063c11051afc30c4949de993a12d] <==
	* [INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = c452237b08d4ce46c54c803341046308
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[INFO] 127.0.0.1:59545 - 59154 "HINFO IN 4753441532814147917.1183612911270125046. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.012682274s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	* 
	* ==> describe nodes <==
	* Name:               test-preload-171645
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=test-preload-171645
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=7022875d4a054c2d518e5e5a7b9d500799d50140
	                    minikube.k8s.io/name=test-preload-171645
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_05_31T19_16_09_0700
	                    minikube.k8s.io/version=v1.30.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 31 May 2023 19:16:04 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  test-preload-171645
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 31 May 2023 19:17:46 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 31 May 2023 19:17:15 +0000   Wed, 31 May 2023 19:15:59 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 31 May 2023 19:17:15 +0000   Wed, 31 May 2023 19:15:59 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 31 May 2023 19:17:15 +0000   Wed, 31 May 2023 19:15:59 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 31 May 2023 19:17:15 +0000   Wed, 31 May 2023 19:16:28 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.67.2
	  Hostname:    test-preload-171645
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022624Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022624Ki
	  pods:               110
	System Info:
	  Machine ID:                 7cc03f1f7cce46a8986517109cee4853
	  System UUID:                8d3aa90e-793e-459d-81d9-88314d95436b
	  Boot ID:                    35429d0f-2ece-432d-a992-d9f8cda99d9c
	  Kernel Version:             5.15.0-1037-aws
	  OS Image:                   Ubuntu 22.04.2 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.24.5
	  Kubelet Version:            v1.24.4
	  Kube-Proxy Version:         v1.24.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                           CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                           ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6d4b75cb6d-9qms2                       100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     91s
	  kube-system                 etcd-test-preload-171645                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         104s
	  kube-system                 kindnet-lfm8z                                  100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      91s
	  kube-system                 kube-apiserver-test-preload-171645             250m (12%)    0 (0%)      0 (0%)           0 (0%)         107s
	  kube-system                 kube-controller-manager-test-preload-171645    200m (10%)    0 (0%)      0 (0%)           0 (0%)         104s
	  kube-system                 kube-proxy-x885r                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         91s
	  kube-system                 kube-scheduler-test-preload-171645             100m (5%)     0 (0%)      0 (0%)           0 (0%)         104s
	  kube-system                 storage-provisioner                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         90s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 34s                  kube-proxy       
	  Normal  Starting                 89s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  115s (x5 over 115s)  kubelet          Node test-preload-171645 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    115s (x5 over 115s)  kubelet          Node test-preload-171645 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     115s (x5 over 115s)  kubelet          Node test-preload-171645 status is now: NodeHasSufficientPID
	  Normal  Starting                 105s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  105s                 kubelet          Node test-preload-171645 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    105s                 kubelet          Node test-preload-171645 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     105s                 kubelet          Node test-preload-171645 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           92s                  node-controller  Node test-preload-171645 event: Registered Node test-preload-171645 in Controller
	  Normal  NodeReady                84s                  kubelet          Node test-preload-171645 status is now: NodeReady
	  Normal  Starting                 43s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  43s (x8 over 43s)    kubelet          Node test-preload-171645 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    43s (x8 over 43s)    kubelet          Node test-preload-171645 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     43s (x8 over 43s)    kubelet          Node test-preload-171645 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           24s                  node-controller  Node test-preload-171645 event: Registered Node test-preload-171645 in Controller
	
	* 
	* ==> dmesg <==
	* [  +0.000741] FS-Cache: N-cookie c=0000000c [p=00000003 fl=2 nc=0 na=1]
	[  +0.000954] FS-Cache: N-cookie d=0000000077ffde6d{9p.inode} n=0000000074ec7799
	[  +0.001241] FS-Cache: N-key=[8] '915b3b0000000000'
	[  +0.003042] FS-Cache: Duplicate cookie detected
	[  +0.000777] FS-Cache: O-cookie c=00000006 [p=00000003 fl=226 nc=0 na=1]
	[  +0.000979] FS-Cache: O-cookie d=0000000077ffde6d{9p.inode} n=0000000031e1563a
	[  +0.001057] FS-Cache: O-key=[8] '915b3b0000000000'
	[  +0.000743] FS-Cache: N-cookie c=0000000d [p=00000003 fl=2 nc=0 na=1]
	[  +0.000985] FS-Cache: N-cookie d=0000000077ffde6d{9p.inode} n=000000007278ef73
	[  +0.001110] FS-Cache: N-key=[8] '915b3b0000000000'
	[  +2.905928] FS-Cache: Duplicate cookie detected
	[  +0.000862] FS-Cache: O-cookie c=00000004 [p=00000003 fl=226 nc=0 na=1]
	[  +0.001154] FS-Cache: O-cookie d=0000000077ffde6d{9p.inode} n=00000000ad00c953
	[  +0.001219] FS-Cache: O-key=[8] '905b3b0000000000'
	[  +0.000792] FS-Cache: N-cookie c=0000000f [p=00000003 fl=2 nc=0 na=1]
	[  +0.001108] FS-Cache: N-cookie d=0000000077ffde6d{9p.inode} n=00000000be9b4fe0
	[  +0.001229] FS-Cache: N-key=[8] '905b3b0000000000'
	[  +0.280333] FS-Cache: Duplicate cookie detected
	[  +0.000717] FS-Cache: O-cookie c=00000009 [p=00000003 fl=226 nc=0 na=1]
	[  +0.000979] FS-Cache: O-cookie d=0000000077ffde6d{9p.inode} n=000000003fd4f91a
	[  +0.001109] FS-Cache: O-key=[8] '985b3b0000000000'
	[  +0.000717] FS-Cache: N-cookie c=00000010 [p=00000003 fl=2 nc=0 na=1]
	[  +0.000957] FS-Cache: N-cookie d=0000000077ffde6d{9p.inode} n=0000000074ec7799
	[  +0.001067] FS-Cache: N-key=[8] '985b3b0000000000'
	[  +9.760834] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	
	* 
	* ==> etcd [7ade59fcee4eee005007d4151cf6a879024b4d544fd5a29004a1923c261f7279] <==
	* {"level":"info","ts":"2023-05-31T19:17:10.499Z","caller":"etcdserver/server.go:851","msg":"starting etcd server","local-member-id":"8688e899f7831fc7","local-server-version":"3.5.3","cluster-version":"to_be_decided"}
	{"level":"info","ts":"2023-05-31T19:17:10.506Z","caller":"etcdserver/server.go:752","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2023-05-31T19:17:10.507Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 switched to configuration voters=(9694253945895198663)"}
	{"level":"info","ts":"2023-05-31T19:17:10.507Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"9d8fdeb88b6def78","local-member-id":"8688e899f7831fc7","added-peer-id":"8688e899f7831fc7","added-peer-peer-urls":["https://192.168.67.2:2380"]}
	{"level":"info","ts":"2023-05-31T19:17:10.507Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"9d8fdeb88b6def78","local-member-id":"8688e899f7831fc7","cluster-version":"3.5"}
	{"level":"info","ts":"2023-05-31T19:17:10.507Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-05-31T19:17:10.514Z","caller":"embed/etcd.go:581","msg":"serving peer traffic","address":"192.168.67.2:2380"}
	{"level":"info","ts":"2023-05-31T19:17:10.514Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"192.168.67.2:2380"}
	{"level":"info","ts":"2023-05-31T19:17:10.514Z","caller":"embed/etcd.go:688","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2023-05-31T19:17:10.515Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"8688e899f7831fc7","initial-advertise-peer-urls":["https://192.168.67.2:2380"],"listen-peer-urls":["https://192.168.67.2:2380"],"advertise-client-urls":["https://192.168.67.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.67.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2023-05-31T19:17:10.515Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-05-31T19:17:12.066Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 is starting a new election at term 2"}
	{"level":"info","ts":"2023-05-31T19:17:12.066Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became pre-candidate at term 2"}
	{"level":"info","ts":"2023-05-31T19:17:12.066Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 received MsgPreVoteResp from 8688e899f7831fc7 at term 2"}
	{"level":"info","ts":"2023-05-31T19:17:12.067Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became candidate at term 3"}
	{"level":"info","ts":"2023-05-31T19:17:12.067Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 received MsgVoteResp from 8688e899f7831fc7 at term 3"}
	{"level":"info","ts":"2023-05-31T19:17:12.067Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became leader at term 3"}
	{"level":"info","ts":"2023-05-31T19:17:12.067Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 8688e899f7831fc7 elected leader 8688e899f7831fc7 at term 3"}
	{"level":"info","ts":"2023-05-31T19:17:12.070Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"8688e899f7831fc7","local-member-attributes":"{Name:test-preload-171645 ClientURLs:[https://192.168.67.2:2379]}","request-path":"/0/members/8688e899f7831fc7/attributes","cluster-id":"9d8fdeb88b6def78","publish-timeout":"7s"}
	{"level":"info","ts":"2023-05-31T19:17:12.070Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-05-31T19:17:12.074Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-05-31T19:17:12.074Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-05-31T19:17:12.076Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.67.2:2379"}
	{"level":"info","ts":"2023-05-31T19:17:12.076Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-05-31T19:17:12.076Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	* 
	* ==> kernel <==
	*  19:17:52 up  1:00,  0 users,  load average: 1.60, 1.54, 1.30
	Linux test-preload-171645 5.15.0-1037-aws #41~20.04.1-Ubuntu SMP Mon May 22 18:20:20 UTC 2023 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.2 LTS"
	
	* 
	* ==> kindnet [af4aa273ba31774d60599fb266cb67427e68be252a27563bdb4ffabb0a19810f] <==
	* I0531 19:17:17.329801       1 main.go:102] connected to apiserver: https://10.96.0.1:443
	I0531 19:17:17.330054       1 main.go:107] hostIP = 192.168.67.2
	podIP = 192.168.67.2
	I0531 19:17:17.330180       1 main.go:116] setting mtu 1500 for CNI 
	I0531 19:17:17.330225       1 main.go:146] kindnetd IP family: "ipv4"
	I0531 19:17:17.330264       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I0531 19:17:47.658927       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: i/o timeout
	I0531 19:17:47.677279       1 main.go:223] Handling node with IPs: map[192.168.67.2:{}]
	I0531 19:17:47.677305       1 main.go:227] handling current node
	
	* 
	* ==> kube-apiserver [5ba6b932a18ad1837299984fc92e816124dc12d9be06f5b18864751923586992] <==
	* I0531 19:17:15.552918       1 establishing_controller.go:76] Starting EstablishingController
	I0531 19:17:15.553006       1 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
	I0531 19:17:15.553046       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I0531 19:17:15.553087       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0531 19:17:15.553140       1 crdregistration_controller.go:111] Starting crd-autoregister controller
	I0531 19:17:15.553171       1 shared_informer.go:255] Waiting for caches to sync for crd-autoregister
	I0531 19:17:15.714452       1 shared_informer.go:262] Caches are synced for cluster_authentication_trust_controller
	I0531 19:17:15.714574       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0531 19:17:15.722852       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0531 19:17:15.733069       1 cache.go:39] Caches are synced for autoregister controller
	I0531 19:17:15.733553       1 shared_informer.go:262] Caches are synced for crd-autoregister
	I0531 19:17:15.733984       1 shared_informer.go:262] Caches are synced for node_authorizer
	I0531 19:17:15.748680       1 apf_controller.go:322] Running API Priority and Fairness config worker
	E0531 19:17:15.761842       1 controller.go:169] Error removing old endpoints from kubernetes service: no master IPs were listed in storage, refusing to erase all endpoints for the kubernetes service
	I0531 19:17:15.767020       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I0531 19:17:16.128365       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0531 19:17:16.518877       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0531 19:17:17.992290       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	I0531 19:17:18.229036       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0531 19:17:18.354200       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0531 19:17:18.366379       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0531 19:17:18.419395       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0531 19:17:18.425999       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0531 19:17:28.911621       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0531 19:17:28.960776       1 controller.go:611] quota admission added evaluator for: endpoints
	
	* 
	* ==> kube-controller-manager [a6bd1edf7d37838f6720c216a2026a90f66bdb6e390a774253ba2ab0869fdd9a] <==
	* I0531 19:17:28.658264       1 shared_informer.go:262] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0531 19:17:28.658807       1 shared_informer.go:262] Caches are synced for certificate-csrsigning-kubelet-serving
	I0531 19:17:28.658884       1 shared_informer.go:262] Caches are synced for certificate-csrsigning-kubelet-client
	I0531 19:17:28.658369       1 shared_informer.go:262] Caches are synced for certificate-csrsigning-legacy-unknown
	I0531 19:17:28.670910       1 shared_informer.go:262] Caches are synced for crt configmap
	I0531 19:17:28.671893       1 shared_informer.go:262] Caches are synced for endpoint
	I0531 19:17:28.671933       1 shared_informer.go:262] Caches are synced for PVC protection
	I0531 19:17:28.671956       1 shared_informer.go:262] Caches are synced for endpoint_slice
	I0531 19:17:28.674859       1 shared_informer.go:262] Caches are synced for GC
	I0531 19:17:28.678792       1 shared_informer.go:262] Caches are synced for taint
	I0531 19:17:28.678913       1 node_lifecycle_controller.go:1399] Initializing eviction metric for zone: 
	W0531 19:17:28.678981       1 node_lifecycle_controller.go:1014] Missing timestamp for Node test-preload-171645. Assuming now as a timestamp.
	I0531 19:17:28.679033       1 node_lifecycle_controller.go:1215] Controller detected that zone  is now in state Normal.
	I0531 19:17:28.679297       1 taint_manager.go:187] "Starting NoExecuteTaintManager"
	I0531 19:17:28.679476       1 event.go:294] "Event occurred" object="test-preload-171645" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node test-preload-171645 event: Registered Node test-preload-171645 in Controller"
	I0531 19:17:28.682893       1 shared_informer.go:262] Caches are synced for endpoint_slice_mirroring
	I0531 19:17:28.683004       1 shared_informer.go:262] Caches are synced for certificate-csrapproving
	I0531 19:17:28.757458       1 shared_informer.go:262] Caches are synced for HPA
	I0531 19:17:28.816771       1 shared_informer.go:262] Caches are synced for resource quota
	I0531 19:17:28.836087       1 shared_informer.go:262] Caches are synced for service account
	I0531 19:17:28.853137       1 shared_informer.go:262] Caches are synced for namespace
	I0531 19:17:28.860843       1 shared_informer.go:262] Caches are synced for resource quota
	I0531 19:17:29.303105       1 shared_informer.go:262] Caches are synced for garbage collector
	I0531 19:17:29.342829       1 shared_informer.go:262] Caches are synced for garbage collector
	I0531 19:17:29.342860       1 garbagecollector.go:158] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	
	* 
	* ==> kube-proxy [0a9be86fb85fae4a162b83590df84ba72647fff053cf47975b2694e67f80d878] <==
	* I0531 19:17:17.934433       1 node.go:163] Successfully retrieved node IP: 192.168.67.2
	I0531 19:17:17.934720       1 server_others.go:138] "Detected node IP" address="192.168.67.2"
	I0531 19:17:17.934819       1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0531 19:17:17.984942       1 server_others.go:206] "Using iptables Proxier"
	I0531 19:17:17.985059       1 server_others.go:213] "kube-proxy running in dual-stack mode" ipFamily=IPv4
	I0531 19:17:17.985094       1 server_others.go:214] "Creating dualStackProxier for iptables"
	I0531 19:17:17.985149       1 server_others.go:501] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6"
	I0531 19:17:17.985251       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0531 19:17:17.985421       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0531 19:17:17.985940       1 server.go:661] "Version info" version="v1.24.4"
	I0531 19:17:17.985997       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0531 19:17:17.986723       1 config.go:317] "Starting service config controller"
	I0531 19:17:17.986836       1 shared_informer.go:255] Waiting for caches to sync for service config
	I0531 19:17:17.986901       1 config.go:226] "Starting endpoint slice config controller"
	I0531 19:17:17.986963       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
	I0531 19:17:17.988853       1 config.go:444] "Starting node config controller"
	I0531 19:17:17.989740       1 shared_informer.go:255] Waiting for caches to sync for node config
	I0531 19:17:18.087363       1 shared_informer.go:262] Caches are synced for endpoint slice config
	I0531 19:17:18.087487       1 shared_informer.go:262] Caches are synced for service config
	I0531 19:17:18.089874       1 shared_informer.go:262] Caches are synced for node config
	
	* 
	* ==> kube-scheduler [852bb3872c53694f68bcba4352747549ac91f82895ffabc17a7720838e9bc56f] <==
	* I0531 19:17:11.921753       1 serving.go:348] Generated self-signed cert in-memory
	W0531 19:17:15.565971       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0531 19:17:15.566092       1 authentication.go:346] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0531 19:17:15.566127       1 authentication.go:347] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0531 19:17:15.566169       1 authentication.go:348] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0531 19:17:15.762205       1 server.go:147] "Starting Kubernetes Scheduler" version="v1.24.4"
	I0531 19:17:15.762231       1 server.go:149] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0531 19:17:15.763767       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0531 19:17:15.763793       1 shared_informer.go:255] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0531 19:17:15.764460       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0531 19:17:15.764507       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I0531 19:17:15.864179       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* May 31 19:17:15 test-preload-171645 kubelet[908]: I0531 19:17:15.799437     908 kubelet_node_status.go:108] "Node was previously registered" node="test-preload-171645"
	May 31 19:17:15 test-preload-171645 kubelet[908]: I0531 19:17:15.799554     908 kubelet_node_status.go:73] "Successfully registered node" node="test-preload-171645"
	May 31 19:17:15 test-preload-171645 kubelet[908]: I0531 19:17:15.809068     908 kuberuntime_manager.go:1095] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	May 31 19:17:15 test-preload-171645 kubelet[908]: I0531 19:17:15.809692     908 kubelet_network.go:60] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	May 31 19:17:16 test-preload-171645 kubelet[908]: I0531 19:17:16.491609     908 apiserver.go:52] "Watching apiserver"
	May 31 19:17:16 test-preload-171645 kubelet[908]: I0531 19:17:16.495268     908 topology_manager.go:200] "Topology Admit Handler"
	May 31 19:17:16 test-preload-171645 kubelet[908]: I0531 19:17:16.495371     908 topology_manager.go:200] "Topology Admit Handler"
	May 31 19:17:16 test-preload-171645 kubelet[908]: I0531 19:17:16.495413     908 topology_manager.go:200] "Topology Admit Handler"
	May 31 19:17:16 test-preload-171645 kubelet[908]: I0531 19:17:16.495459     908 topology_manager.go:200] "Topology Admit Handler"
	May 31 19:17:16 test-preload-171645 kubelet[908]: I0531 19:17:16.639512     908 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/910feb26-d4d3-458f-8fe5-c7cf99b90dc0-lib-modules\") pod \"kube-proxy-x885r\" (UID: \"910feb26-d4d3-458f-8fe5-c7cf99b90dc0\") " pod="kube-system/kube-proxy-x885r"
	May 31 19:17:16 test-preload-171645 kubelet[908]: I0531 19:17:16.639563     908 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/910feb26-d4d3-458f-8fe5-c7cf99b90dc0-xtables-lock\") pod \"kube-proxy-x885r\" (UID: \"910feb26-d4d3-458f-8fe5-c7cf99b90dc0\") " pod="kube-system/kube-proxy-x885r"
	May 31 19:17:16 test-preload-171645 kubelet[908]: I0531 19:17:16.639594     908 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qvq4w\" (UniqueName: \"kubernetes.io/projected/910feb26-d4d3-458f-8fe5-c7cf99b90dc0-kube-api-access-qvq4w\") pod \"kube-proxy-x885r\" (UID: \"910feb26-d4d3-458f-8fe5-c7cf99b90dc0\") " pod="kube-system/kube-proxy-x885r"
	May 31 19:17:16 test-preload-171645 kubelet[908]: I0531 19:17:16.639623     908 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lvjsb\" (UniqueName: \"kubernetes.io/projected/4e791c9c-e1da-441b-970f-0325882ec2f9-kube-api-access-lvjsb\") pod \"coredns-6d4b75cb6d-9qms2\" (UID: \"4e791c9c-e1da-441b-970f-0325882ec2f9\") " pod="kube-system/coredns-6d4b75cb6d-9qms2"
	May 31 19:17:16 test-preload-171645 kubelet[908]: I0531 19:17:16.639651     908 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/3daf2b79-9d3a-4492-b881-64a63c4936e7-cni-cfg\") pod \"kindnet-lfm8z\" (UID: \"3daf2b79-9d3a-4492-b881-64a63c4936e7\") " pod="kube-system/kindnet-lfm8z"
	May 31 19:17:16 test-preload-171645 kubelet[908]: I0531 19:17:16.639693     908 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-94j86\" (UniqueName: \"kubernetes.io/projected/3daf2b79-9d3a-4492-b881-64a63c4936e7-kube-api-access-94j86\") pod \"kindnet-lfm8z\" (UID: \"3daf2b79-9d3a-4492-b881-64a63c4936e7\") " pod="kube-system/kindnet-lfm8z"
	May 31 19:17:16 test-preload-171645 kubelet[908]: I0531 19:17:16.639718     908 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3daf2b79-9d3a-4492-b881-64a63c4936e7-xtables-lock\") pod \"kindnet-lfm8z\" (UID: \"3daf2b79-9d3a-4492-b881-64a63c4936e7\") " pod="kube-system/kindnet-lfm8z"
	May 31 19:17:16 test-preload-171645 kubelet[908]: I0531 19:17:16.639742     908 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/910feb26-d4d3-458f-8fe5-c7cf99b90dc0-kube-proxy\") pod \"kube-proxy-x885r\" (UID: \"910feb26-d4d3-458f-8fe5-c7cf99b90dc0\") " pod="kube-system/kube-proxy-x885r"
	May 31 19:17:16 test-preload-171645 kubelet[908]: I0531 19:17:16.639765     908 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4e791c9c-e1da-441b-970f-0325882ec2f9-config-volume\") pod \"coredns-6d4b75cb6d-9qms2\" (UID: \"4e791c9c-e1da-441b-970f-0325882ec2f9\") " pod="kube-system/coredns-6d4b75cb6d-9qms2"
	May 31 19:17:16 test-preload-171645 kubelet[908]: I0531 19:17:16.639790     908 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3daf2b79-9d3a-4492-b881-64a63c4936e7-lib-modules\") pod \"kindnet-lfm8z\" (UID: \"3daf2b79-9d3a-4492-b881-64a63c4936e7\") " pod="kube-system/kindnet-lfm8z"
	May 31 19:17:16 test-preload-171645 kubelet[908]: I0531 19:17:16.639824     908 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/8099d68e-42e6-48d8-b543-2674a3a0d5e1-tmp\") pod \"storage-provisioner\" (UID: \"8099d68e-42e6-48d8-b543-2674a3a0d5e1\") " pod="kube-system/storage-provisioner"
	May 31 19:17:16 test-preload-171645 kubelet[908]: I0531 19:17:16.639849     908 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bdctz\" (UniqueName: \"kubernetes.io/projected/8099d68e-42e6-48d8-b543-2674a3a0d5e1-kube-api-access-bdctz\") pod \"storage-provisioner\" (UID: \"8099d68e-42e6-48d8-b543-2674a3a0d5e1\") " pod="kube-system/storage-provisioner"
	May 31 19:17:16 test-preload-171645 kubelet[908]: I0531 19:17:16.639862     908 reconciler.go:159] "Reconciler: start to sync state"
	May 31 19:17:17 test-preload-171645 kubelet[908]: W0531 19:17:17.441172     908 manager.go:1176] Failed to process watch event {EventType:0 Name:/docker/e5f2892ff524db7d65c484d67edc34bdf09192ed243f8ed7b2d47a248355b354/crio/crio-0532a6ffe2d1035a07704289137d23b1dfd700812e605187ddde4b73a590bd7e WatchSource:0}: Error finding container 0532a6ffe2d1035a07704289137d23b1dfd700812e605187ddde4b73a590bd7e: Status 404 returned error &{%!s(*http.body=&{0x40012a9e78 <nil> <nil> false false {0 0} false false false <nil>}) {%!s(int32=0) %!s(uint32=0)} %!s(bool=false) <nil> %!s(func(error) error=0x7e6400) %!s(func() error=0x7e6500)}
	May 31 19:17:17 test-preload-171645 kubelet[908]: W0531 19:17:17.471253     908 manager.go:1176] Failed to process watch event {EventType:0 Name:/docker/e5f2892ff524db7d65c484d67edc34bdf09192ed243f8ed7b2d47a248355b354/crio/crio-2c15f7d307ff99b44f522e1dae8497f320db1fe6c1324bd3cee723d8be484e00 WatchSource:0}: Error finding container 2c15f7d307ff99b44f522e1dae8497f320db1fe6c1324bd3cee723d8be484e00: Status 404 returned error &{%!s(*http.body=&{0x400157abd0 <nil> <nil> false false {0 0} false false false <nil>}) {%!s(int32=0) %!s(uint32=0)} %!s(bool=false) <nil> %!s(func(error) error=0x7e6400) %!s(func() error=0x7e6500)}
	May 31 19:17:48 test-preload-171645 kubelet[908]: I0531 19:17:48.670674     908 scope.go:110] "RemoveContainer" containerID="9e7112c6491c256b1bf487826e88db556034de5b88fa66c4adf63c7967d05fd6"
	
	* 
	* ==> storage-provisioner [4fcec21e533f15284f9bb46bafbfeb2742e709101c3c073eae657101b7a91fcb] <==
	* I0531 19:17:48.793038       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0531 19:17:48.807731       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0531 19:17:48.807827       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	
	* 
	* ==> storage-provisioner [9e7112c6491c256b1bf487826e88db556034de5b88fa66c4adf63c7967d05fd6] <==
	* I0531 19:17:17.852807       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0531 19:17:47.855158       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
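The kindnet and first storage-provisioner logs above both time out dialing 10.96.0.1:443 (the in-cluster kubernetes Service VIP) for roughly 30 seconds after the restart, until kube-proxy has re-synced its iptables rules; the replacement storage-provisioner container then starts cleanly. A minimal sketch for probing that window by hand, assuming the test-preload-171645 profile is still up (the bash /dev/tcp probe is an assumption about what is available in the node image):

	# Hypothetical probe: print when the Service VIP starts accepting connections
	# from inside the minikube node after a restart.
	minikube -p test-preload-171645 ssh -- bash -c \
	  'until timeout 2 bash -c "</dev/tcp/10.96.0.1/443" 2>/dev/null; do
	     echo "10.96.0.1:443 not reachable yet"; sleep 1;
	   done; echo "10.96.0.1:443 reachable"'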
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p test-preload-171645 -n test-preload-171645
helpers_test.go:261: (dbg) Run:  kubectl --context test-preload-171645 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
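The post-mortem above checks control-plane health twice: once through minikube's own status command with a Go template that extracts the APIServer field, and once through the Kubernetes API with a field selector that lists any pod not in the Running phase. Both queries can be reused standalone; a small sketch with the values quoted for an interactive shell (same commands as in the log, profile name taken from it):

	# Hypothetical standalone replay of the two post-mortem health queries.
	out/minikube-linux-arm64 status --format='{{.APIServer}}' -p test-preload-171645
	kubectl --context test-preload-171645 get po -A \
	  --field-selector=status.phase!=Running \
	  -o=jsonpath='{.items[*].metadata.name}'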
helpers_test.go:285: <<< TestPreload FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
helpers_test.go:175: Cleaning up "test-preload-171645" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p test-preload-171645
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p test-preload-171645: (2.367767713s)
--- FAIL: TestPreload (172.17s)

                                                
                                    
x
+
TestRunningBinaryUpgrade (68.67s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:132: (dbg) Run:  /tmp/minikube-v1.17.0.3445682957.exe start -p running-upgrade-862679 --memory=2200 --vm-driver=docker  --container-runtime=crio
E0531 19:25:08.303904    7804 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16569-2389/.minikube/profiles/addons-748280/client.crt: no such file or directory
E0531 19:25:18.521078    7804 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16569-2389/.minikube/profiles/functional-747104/client.crt: no such file or directory
version_upgrade_test.go:132: (dbg) Done: /tmp/minikube-v1.17.0.3445682957.exe start -p running-upgrade-862679 --memory=2200 --vm-driver=docker  --container-runtime=crio: (1m0.616464411s)
version_upgrade_test.go:142: (dbg) Run:  out/minikube-linux-arm64 start -p running-upgrade-862679 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:142: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p running-upgrade-862679 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: exit status 90 (3.579029365s)
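The sequence exercised here is the running-binary upgrade path: the archived v1.17.0 release creates and leaves a cluster running, then the freshly built binary re-runs start against the same profile and fails with exit status 90 (its captured stdout and stderr follow below). A condensed sketch of replaying that flow by hand, using the exact commands from the log (the old-release path is the temporary file the test downloaded):

	# Hypothetical manual replay of the upgrade flow this test automates.
	/tmp/minikube-v1.17.0.3445682957.exe start -p running-upgrade-862679 \
	  --memory=2200 --vm-driver=docker --container-runtime=crio    # old release creates the cluster
	out/minikube-linux-arm64 start -p running-upgrade-862679 \
	  --memory=2200 --alsologtostderr -v=1 --driver=docker --container-runtime=crio
	echo "new binary exit status: $?"                              # the test observed exit status 90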

                                                
                                                
-- stdout --
	* [running-upgrade-862679] minikube v1.30.1 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=16569
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/16569-2389/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/16569-2389/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.27.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.27.2
	* Using the docker driver based on existing profile
	* Starting control plane node running-upgrade-862679 in cluster running-upgrade-862679
	* Pulling base image ...
	* Updating the running docker "running-upgrade-862679" container ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0531 19:25:19.180023  129550 out.go:296] Setting OutFile to fd 1 ...
	I0531 19:25:19.180215  129550 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0531 19:25:19.180243  129550 out.go:309] Setting ErrFile to fd 2...
	I0531 19:25:19.180263  129550 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0531 19:25:19.180436  129550 root.go:336] Updating PATH: /home/jenkins/minikube-integration/16569-2389/.minikube/bin
	I0531 19:25:19.181375  129550 out.go:303] Setting JSON to false
	I0531 19:25:19.182480  129550 start.go:127] hostinfo: {"hostname":"ip-172-31-21-244","uptime":4065,"bootTime":1685557055,"procs":344,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1037-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I0531 19:25:19.182580  129550 start.go:137] virtualization:  
	I0531 19:25:19.185553  129550 out.go:177] * [running-upgrade-862679] minikube v1.30.1 on Ubuntu 20.04 (arm64)
	I0531 19:25:19.188420  129550 out.go:177]   - MINIKUBE_LOCATION=16569
	I0531 19:25:19.188608  129550 preload.go:306] deleting older generation preload /home/jenkins/minikube-integration/16569-2389/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v8-v1.20.2-cri-o-overlay-arm64.tar.lz4
	I0531 19:25:19.188651  129550 notify.go:220] Checking for updates...
	I0531 19:25:19.194144  129550 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0531 19:25:19.196295  129550 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/16569-2389/kubeconfig
	I0531 19:25:19.198476  129550 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/16569-2389/.minikube
	I0531 19:25:19.200398  129550 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0531 19:25:19.203140  129550 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0531 19:25:19.205741  129550 config.go:182] Loaded profile config "running-upgrade-862679": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.20.2
	I0531 19:25:19.209430  129550 out.go:177] * Kubernetes 1.27.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.27.2
	I0531 19:25:19.211534  129550 driver.go:375] Setting default libvirt URI to qemu:///system
	I0531 19:25:19.266988  129550 docker.go:121] docker version: linux-24.0.2:Docker Engine - Community
	I0531 19:25:19.267109  129550 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0531 19:25:19.354876  129550 preload.go:306] deleting older generation preload /home/jenkins/minikube-integration/16569-2389/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v8-v1.20.2-cri-o-overlay-arm64.tar.lz4.checksum
	I0531 19:25:19.395032  129550 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:6 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:43 OomKillDisable:true NGoroutines:54 SystemTime:2023-05-31 19:25:19.37071109 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1037-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215166976 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:24.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dce8eb055cbb6872793272b4f20ed16117344f8 Expected:3dce8eb055cbb6872793272b4f20ed16117344f8} RuncCommit:{ID:v1.1.7-0-g860f061 Expected:v1.1.7-0-g860f061} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.5] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.18.1]] Warnings:<nil>}}
	I0531 19:25:19.395151  129550 docker.go:294] overlay module found
	I0531 19:25:19.399007  129550 out.go:177] * Using the docker driver based on existing profile
	I0531 19:25:19.401181  129550 start.go:297] selected driver: docker
	I0531 19:25:19.401203  129550 start.go:875] validating driver "docker" against &{Name:running-upgrade-862679 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.2 ClusterName:running-upgrade-862679 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.70.216 Port:8443 KubernetesVersion:v1.20.2 ContainerRuntime: ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0531 19:25:19.401307  129550 start.go:886] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0531 19:25:19.401928  129550 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0531 19:25:19.522524  129550 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:6 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:39 OomKillDisable:true NGoroutines:54 SystemTime:2023-05-31 19:25:19.512415734 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1037-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215166976 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:24.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dce8eb055cbb6872793272b4f20ed16117344f8 Expected:3dce8eb055cbb6872793272b4f20ed16117344f8} RuncCommit:{ID:v1.1.7-0-g860f061 Expected:v1.1.7-0-g860f061} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.5] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.18.1]] Warnings:<nil>}}
	I0531 19:25:19.522848  129550 cni.go:84] Creating CNI manager for ""
	I0531 19:25:19.522866  129550 cni.go:142] "docker" driver + "crio" runtime found, recommending kindnet
	I0531 19:25:19.522877  129550 start_flags.go:319] config:
	{Name:running-upgrade-862679 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.2 ClusterName:running-upgrade-862679 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.70.216 Port:8443 KubernetesVersion:v1.20.2 ContainerRuntime: ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0531 19:25:19.525486  129550 out.go:177] * Starting control plane node running-upgrade-862679 in cluster running-upgrade-862679
	I0531 19:25:19.526974  129550 cache.go:122] Beginning downloading kic base image for docker with crio
	I0531 19:25:19.529197  129550 out.go:177] * Pulling base image ...
	I0531 19:25:19.531279  129550 preload.go:132] Checking if preload exists for k8s version v1.20.2 and runtime crio
	I0531 19:25:19.531436  129550 image.go:79] Checking for gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e in local docker daemon
	I0531 19:25:19.551938  129550 image.go:83] Found gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e in local docker daemon, skipping pull
	I0531 19:25:19.551964  129550 cache.go:145] gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e exists in daemon, skipping load
	W0531 19:25:19.602941  129550 preload.go:115] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.2/preloaded-images-k8s-v18-v1.20.2-cri-o-overlay-arm64.tar.lz4 status code: 404
	I0531 19:25:19.603092  129550 profile.go:148] Saving config to /home/jenkins/minikube-integration/16569-2389/.minikube/profiles/running-upgrade-862679/config.json ...
	I0531 19:25:19.603347  129550 cache.go:195] Successfully downloaded all kic artifacts
	I0531 19:25:19.603391  129550 start.go:364] acquiring machines lock for running-upgrade-862679: {Name:mk7a75e607d3d0a6358ae6fe39b8bd7d3c160775 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0531 19:25:19.603446  129550 start.go:368] acquired machines lock for "running-upgrade-862679" in 31.122µs
	I0531 19:25:19.603465  129550 start.go:96] Skipping create...Using existing machine configuration
	I0531 19:25:19.603471  129550 fix.go:55] fixHost starting: 
	I0531 19:25:19.603728  129550 cli_runner.go:164] Run: docker container inspect running-upgrade-862679 --format={{.State.Status}}
	I0531 19:25:19.603972  129550 cache.go:107] acquiring lock: {Name:mk89f9e6ee8c851ef9ed99cf6ebe7adc39020f0b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0531 19:25:19.604056  129550 cache.go:115] /home/jenkins/minikube-integration/16569-2389/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0531 19:25:19.604069  129550 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/16569-2389/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 103.039µs
	I0531 19:25:19.604081  129550 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/16569-2389/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0531 19:25:19.604093  129550 cache.go:107] acquiring lock: {Name:mk8fccbba98eb1702cef51c7850f17b093bea715 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0531 19:25:19.604125  129550 cache.go:115] /home/jenkins/minikube-integration/16569-2389/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.20.2 exists
	I0531 19:25:19.604136  129550 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.20.2" -> "/home/jenkins/minikube-integration/16569-2389/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.20.2" took 43.126µs
	I0531 19:25:19.604143  129550 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.20.2 -> /home/jenkins/minikube-integration/16569-2389/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.20.2 succeeded
	I0531 19:25:19.604153  129550 cache.go:107] acquiring lock: {Name:mkcb72e74476706329269bb5e468bfc2d57d9b5d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0531 19:25:19.604180  129550 cache.go:115] /home/jenkins/minikube-integration/16569-2389/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.20.2 exists
	I0531 19:25:19.604190  129550 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.20.2" -> "/home/jenkins/minikube-integration/16569-2389/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.20.2" took 37.793µs
	I0531 19:25:19.604198  129550 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.20.2 -> /home/jenkins/minikube-integration/16569-2389/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.20.2 succeeded
	I0531 19:25:19.604210  129550 cache.go:107] acquiring lock: {Name:mk8f4513ce2aea84ca1e6ce414d1dfbd1c3a832e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0531 19:25:19.604241  129550 cache.go:115] /home/jenkins/minikube-integration/16569-2389/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.20.2 exists
	I0531 19:25:19.604249  129550 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.20.2" -> "/home/jenkins/minikube-integration/16569-2389/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.20.2" took 39.843µs
	I0531 19:25:19.604255  129550 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.20.2 -> /home/jenkins/minikube-integration/16569-2389/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.20.2 succeeded
	I0531 19:25:19.604266  129550 cache.go:107] acquiring lock: {Name:mk2292ee62d0e7f8d023be18d02f0546c1cd97a6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0531 19:25:19.604296  129550 cache.go:115] /home/jenkins/minikube-integration/16569-2389/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.20.2 exists
	I0531 19:25:19.604304  129550 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.20.2" -> "/home/jenkins/minikube-integration/16569-2389/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.20.2" took 40.442µs
	I0531 19:25:19.604311  129550 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.20.2 -> /home/jenkins/minikube-integration/16569-2389/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.20.2 succeeded
	I0531 19:25:19.604319  129550 cache.go:107] acquiring lock: {Name:mkf3e47cfb9a25cbf9383b1ed7627945128b152f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0531 19:25:19.604350  129550 cache.go:115] /home/jenkins/minikube-integration/16569-2389/.minikube/cache/images/arm64/registry.k8s.io/pause_3.2 exists
	I0531 19:25:19.604358  129550 cache.go:96] cache image "registry.k8s.io/pause:3.2" -> "/home/jenkins/minikube-integration/16569-2389/.minikube/cache/images/arm64/registry.k8s.io/pause_3.2" took 39.45µs
	I0531 19:25:19.604364  129550 cache.go:80] save to tar file registry.k8s.io/pause:3.2 -> /home/jenkins/minikube-integration/16569-2389/.minikube/cache/images/arm64/registry.k8s.io/pause_3.2 succeeded
	I0531 19:25:19.604376  129550 cache.go:107] acquiring lock: {Name:mk02f998900df685bacb6d0f85eccc031a990f1a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0531 19:25:19.604404  129550 cache.go:115] /home/jenkins/minikube-integration/16569-2389/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.4.13-0 exists
	I0531 19:25:19.604412  129550 cache.go:96] cache image "registry.k8s.io/etcd:3.4.13-0" -> "/home/jenkins/minikube-integration/16569-2389/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.4.13-0" took 37.916µs
	I0531 19:25:19.604419  129550 cache.go:80] save to tar file registry.k8s.io/etcd:3.4.13-0 -> /home/jenkins/minikube-integration/16569-2389/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.4.13-0 succeeded
	I0531 19:25:19.604429  129550 cache.go:107] acquiring lock: {Name:mk74a1c08aa1553d1a9cd0c0856a9f1935620cd7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0531 19:25:19.604458  129550 cache.go:115] /home/jenkins/minikube-integration/16569-2389/.minikube/cache/images/arm64/registry.k8s.io/coredns_1.7.0 exists
	I0531 19:25:19.604466  129550 cache.go:96] cache image "registry.k8s.io/coredns:1.7.0" -> "/home/jenkins/minikube-integration/16569-2389/.minikube/cache/images/arm64/registry.k8s.io/coredns_1.7.0" took 38.95µs
	I0531 19:25:19.604474  129550 cache.go:80] save to tar file registry.k8s.io/coredns:1.7.0 -> /home/jenkins/minikube-integration/16569-2389/.minikube/cache/images/arm64/registry.k8s.io/coredns_1.7.0 succeeded
	I0531 19:25:19.604479  129550 cache.go:87] Successfully saved all images to host disk.
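The cache.go lines above all follow the same pattern: take a per-image lock, stat the tarball under .minikube/cache/images/arm64/, and report "exists ... took Nµs" without re-downloading. A minimal sketch of that idea, assuming hypothetical names (cacheImage, saveTar) rather than minikube's actual API:

    package main

    import (
        "fmt"
        "os"
        "sync"
        "time"
    )

    var cacheMu sync.Mutex // stands in for the named file lock seen in the log

    // cacheImage ensures a tarball for ref exists at path, skipping the
    // download when a previous run already saved it.
    func cacheImage(ref, path string, saveTar func(ref, path string) error) error {
        start := time.Now()
        cacheMu.Lock()
        defer cacheMu.Unlock()
        if _, err := os.Stat(path); err == nil {
            fmt.Printf("cache image %q -> %q took %s (exists)\n", ref, path, time.Since(start))
            return nil // cache hit: nothing to do
        }
        return saveTar(ref, path) // cache miss: fetch and save to tar file
    }

    func main() {
        err := cacheImage("registry.k8s.io/pause:3.2", "/tmp/pause_3.2",
            func(ref, path string) error { return os.WriteFile(path, []byte(ref), 0o644) })
        fmt.Println(err)
    }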
	I0531 19:25:19.627124  129550 fix.go:103] recreateIfNeeded on running-upgrade-862679: state=Running err=<nil>
	W0531 19:25:19.627148  129550 fix.go:129] unexpected machine state, will restart: <nil>
	I0531 19:25:19.630166  129550 out.go:177] * Updating the running docker "running-upgrade-862679" container ...
	I0531 19:25:19.632145  129550 machine.go:88] provisioning docker machine ...
	I0531 19:25:19.632190  129550 ubuntu.go:169] provisioning hostname "running-upgrade-862679"
	I0531 19:25:19.632256  129550 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-862679
	I0531 19:25:19.666770  129550 main.go:141] libmachine: Using SSH client type: native
	I0531 19:25:19.667470  129550 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x39eaa0] 0x3a1430 <nil>  [] 0s} 127.0.0.1 32969 <nil> <nil>}
	I0531 19:25:19.667495  129550 main.go:141] libmachine: About to run SSH command:
	sudo hostname running-upgrade-862679 && echo "running-upgrade-862679" | sudo tee /etc/hostname
	I0531 19:25:19.862869  129550 main.go:141] libmachine: SSH cmd err, output: <nil>: running-upgrade-862679
	
	I0531 19:25:19.862961  129550 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-862679
	I0531 19:25:19.890723  129550 main.go:141] libmachine: Using SSH client type: native
	I0531 19:25:19.891307  129550 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x39eaa0] 0x3a1430 <nil>  [] 0s} 127.0.0.1 32969 <nil> <nil>}
	I0531 19:25:19.891335  129550 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\srunning-upgrade-862679' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 running-upgrade-862679/g' /etc/hosts;
				else 
					echo '127.0.1.1 running-upgrade-862679' | sudo tee -a /etc/hosts; 
				fi
			fi
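The shell fragment above is an idempotent /etc/hosts edit: if no line already ends with the hostname, it rewrites an existing 127.0.1.1 entry in place, otherwise appends one. The same logic expressed in Go over the file contents (a sketch of the technique, not the code minikube sends over SSH):

    package main

    import (
        "fmt"
        "regexp"
        "strings"
    )

    // ensureHostsEntry returns hosts with a "127.0.1.1 <name>" mapping,
    // leaving the input untouched when name is already mapped.
    func ensureHostsEntry(hosts, name string) string {
        if regexp.MustCompile(`(?m)\s`+regexp.QuoteMeta(name)+`$`).MatchString(hosts) {
            return hosts // hostname already present
        }
        loopback := regexp.MustCompile(`(?m)^127\.0\.1\.1\s.*$`)
        if loopback.MatchString(hosts) {
            return loopback.ReplaceAllString(hosts, "127.0.1.1 "+name) // rewrite in place
        }
        return strings.TrimRight(hosts, "\n") + "\n127.0.1.1 " + name + "\n" // append
    }

    func main() {
        fmt.Print(ensureHostsEntry("127.0.0.1 localhost\n", "running-upgrade-862679"))
    }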
	I0531 19:25:20.060620  129550 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0531 19:25:20.060643  129550 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/16569-2389/.minikube CaCertPath:/home/jenkins/minikube-integration/16569-2389/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/16569-2389/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/16569-2389/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/16569-2389/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/16569-2389/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/16569-2389/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/16569-2389/.minikube}
	I0531 19:25:20.060667  129550 ubuntu.go:177] setting up certificates
	I0531 19:25:20.060677  129550 provision.go:83] configureAuth start
	I0531 19:25:20.060743  129550 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" running-upgrade-862679
	I0531 19:25:20.081480  129550 provision.go:138] copyHostCerts
	I0531 19:25:20.081565  129550 exec_runner.go:144] found /home/jenkins/minikube-integration/16569-2389/.minikube/ca.pem, removing ...
	I0531 19:25:20.081578  129550 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16569-2389/.minikube/ca.pem
	I0531 19:25:20.081665  129550 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16569-2389/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/16569-2389/.minikube/ca.pem (1078 bytes)
	I0531 19:25:20.081781  129550 exec_runner.go:144] found /home/jenkins/minikube-integration/16569-2389/.minikube/cert.pem, removing ...
	I0531 19:25:20.081793  129550 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16569-2389/.minikube/cert.pem
	I0531 19:25:20.081824  129550 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16569-2389/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/16569-2389/.minikube/cert.pem (1123 bytes)
	I0531 19:25:20.081894  129550 exec_runner.go:144] found /home/jenkins/minikube-integration/16569-2389/.minikube/key.pem, removing ...
	I0531 19:25:20.081906  129550 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16569-2389/.minikube/key.pem
	I0531 19:25:20.081934  129550 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16569-2389/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/16569-2389/.minikube/key.pem (1679 bytes)
	I0531 19:25:20.082038  129550 provision.go:112] generating server cert: /home/jenkins/minikube-integration/16569-2389/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/16569-2389/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/16569-2389/.minikube/certs/ca-key.pem org=jenkins.running-upgrade-862679 san=[192.168.70.216 127.0.0.1 localhost 127.0.0.1 minikube running-upgrade-862679]
	I0531 19:25:20.384594  129550 provision.go:172] copyRemoteCerts
	I0531 19:25:20.384659  129550 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0531 19:25:20.384704  129550 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-862679
	I0531 19:25:20.406402  129550 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32969 SSHKeyPath:/home/jenkins/minikube-integration/16569-2389/.minikube/machines/running-upgrade-862679/id_rsa Username:docker}
	I0531 19:25:20.508219  129550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16569-2389/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0531 19:25:20.533946  129550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16569-2389/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0531 19:25:20.560119  129550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16569-2389/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0531 19:25:20.584094  129550 provision.go:86] duration metric: configureAuth took 523.405297ms
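provision.go:112 above issues a server certificate whose SAN list covers the container IP, the loopback address, and the machine names, so the endpoint validates whichever address a client dials. A compressed sketch of building such a certificate with crypto/x509 (self-signed here for brevity; the real flow signs with the CA at certs/ca.pem):

    package main

    import (
        "crypto/ecdsa"
        "crypto/elliptic"
        "crypto/rand"
        "crypto/x509"
        "crypto/x509/pkix"
        "fmt"
        "math/big"
        "net"
        "time"
    )

    func main() {
        key, _ := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(1),
            Subject:      pkix.Name{Organization: []string{"jenkins.running-upgrade-862679"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(365 * 24 * time.Hour),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            // SANs taken from the log line: IPs and DNS names the server answers on.
            IPAddresses: []net.IP{net.ParseIP("192.168.70.216"), net.ParseIP("127.0.0.1")},
            DNSNames:    []string{"localhost", "minikube", "running-upgrade-862679"},
        }
        der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
        if err != nil {
            panic(err)
        }
        fmt.Printf("issued %d-byte DER server certificate\n", len(der))
    }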
	I0531 19:25:20.584123  129550 ubuntu.go:193] setting minikube options for container-runtime
	I0531 19:25:20.584315  129550 config.go:182] Loaded profile config "running-upgrade-862679": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.20.2
	I0531 19:25:20.584433  129550 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-862679
	I0531 19:25:20.605154  129550 main.go:141] libmachine: Using SSH client type: native
	I0531 19:25:20.605597  129550 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x39eaa0] 0x3a1430 <nil>  [] 0s} 127.0.0.1 32969 <nil> <nil>}
	I0531 19:25:20.605617  129550 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0531 19:25:21.286577  129550 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0531 19:25:21.286603  129550 machine.go:91] provisioned docker machine in 1.654442143s
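The SSH command just executed writes a sysconfig drop-in and restarts CRI-O; 10.96.0.0/12 is the default Kubernetes Service CIDR, so the --insecure-registry flag lets CRI-O pull from in-cluster registry Services without TLS. A sketch of how such a command string can be assembled (hypothetical helper, mirroring the printf | tee pipeline shown in the log):

    package main

    import "fmt"

    // crioEnvCmd reproduces the provisioning command from the log: write a
    // sysconfig drop-in carrying extra CRI-O flags, then restart the service.
    func crioEnvCmd(serviceCIDR string) string {
        return "sudo mkdir -p /etc/sysconfig && printf %s \"\n" +
            fmt.Sprintf("CRIO_MINIKUBE_OPTIONS='--insecure-registry %s '\n", serviceCIDR) +
            "\" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio"
    }

    func main() { fmt.Println(crioEnvCmd("10.96.0.0/12")) }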
	I0531 19:25:21.286614  129550 start.go:300] post-start starting for "running-upgrade-862679" (driver="docker")
	I0531 19:25:21.286621  129550 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0531 19:25:21.286688  129550 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0531 19:25:21.286770  129550 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-862679
	I0531 19:25:21.310671  129550 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32969 SSHKeyPath:/home/jenkins/minikube-integration/16569-2389/.minikube/machines/running-upgrade-862679/id_rsa Username:docker}
	I0531 19:25:21.413183  129550 ssh_runner.go:195] Run: cat /etc/os-release
	I0531 19:25:21.417502  129550 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0531 19:25:21.417532  129550 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0531 19:25:21.417544  129550 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0531 19:25:21.417551  129550 info.go:137] Remote host: Ubuntu 20.04.1 LTS
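The three "Couldn't set key ..." warnings above are harmless: the provisioner parses /etc/os-release into a struct and skips keys it has no field for, then reports the distro it found. The parsing itself is plain KEY=value splitting, roughly (a sketch):

    package main

    import (
        "bufio"
        "fmt"
        "strings"
    )

    // parseOSRelease reads /etc/os-release-style KEY=value pairs; unknown
    // keys (PRIVACY_POLICY_URL, VERSION_CODENAME, ...) are simply dropped.
    func parseOSRelease(data string) map[string]string {
        kv := map[string]string{}
        sc := bufio.NewScanner(strings.NewReader(data))
        for sc.Scan() {
            line := strings.TrimSpace(sc.Text())
            if line == "" || strings.HasPrefix(line, "#") {
                continue
            }
            if k, v, ok := strings.Cut(line, "="); ok {
                kv[k] = strings.Trim(v, `"`)
            }
        }
        return kv
    }

    func main() {
        osr := parseOSRelease("NAME=\"Ubuntu\"\nVERSION_ID=\"20.04\"\nPRETTY_NAME=\"Ubuntu 20.04.1 LTS\"\n")
        fmt.Println(osr["PRETTY_NAME"])
    }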
	I0531 19:25:21.417562  129550 filesync.go:126] Scanning /home/jenkins/minikube-integration/16569-2389/.minikube/addons for local assets ...
	I0531 19:25:21.417623  129550 filesync.go:126] Scanning /home/jenkins/minikube-integration/16569-2389/.minikube/files for local assets ...
	I0531 19:25:21.417716  129550 filesync.go:149] local asset: /home/jenkins/minikube-integration/16569-2389/.minikube/files/etc/ssl/certs/78042.pem -> 78042.pem in /etc/ssl/certs
	I0531 19:25:21.417824  129550 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0531 19:25:21.427424  129550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16569-2389/.minikube/files/etc/ssl/certs/78042.pem --> /etc/ssl/certs/78042.pem (1708 bytes)
	I0531 19:25:21.454243  129550 start.go:303] post-start completed in 167.612828ms
	I0531 19:25:21.454358  129550 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0531 19:25:21.454424  129550 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-862679
	I0531 19:25:21.474850  129550 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32969 SSHKeyPath:/home/jenkins/minikube-integration/16569-2389/.minikube/machines/running-upgrade-862679/id_rsa Username:docker}
	I0531 19:25:21.574223  129550 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0531 19:25:21.580290  129550 fix.go:57] fixHost completed within 1.976811201s
	I0531 19:25:21.580315  129550 start.go:83] releasing machines lock for "running-upgrade-862679", held for 1.976855221s
	I0531 19:25:21.580390  129550 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" running-upgrade-862679
	I0531 19:25:21.604939  129550 ssh_runner.go:195] Run: cat /version.json
	I0531 19:25:21.604994  129550 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-862679
	I0531 19:25:21.605240  129550 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0531 19:25:21.605297  129550 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-862679
	I0531 19:25:21.630483  129550 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32969 SSHKeyPath:/home/jenkins/minikube-integration/16569-2389/.minikube/machines/running-upgrade-862679/id_rsa Username:docker}
	I0531 19:25:21.630818  129550 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32969 SSHKeyPath:/home/jenkins/minikube-integration/16569-2389/.minikube/machines/running-upgrade-862679/id_rsa Username:docker}
	W0531 19:25:21.732203  129550 start.go:414] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0531 19:25:21.732294  129550 ssh_runner.go:195] Run: systemctl --version
	I0531 19:25:21.814503  129550 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0531 19:25:21.922441  129550 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0531 19:25:21.928896  129550 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0531 19:25:21.979693  129550 cni.go:220] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0531 19:25:21.979772  129550 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0531 19:25:22.029111  129550 cni.go:261] disabled [/etc/cni/net.d/100-crio-bridge.conf, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
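cni.go above neutralizes competing CNI configs by renaming them to *.mk_disabled rather than deleting them, so a later start can restore the originals. The find ... -exec mv pipeline boils down to (a sketch in Go, run locally instead of over SSH):

    package main

    import (
        "fmt"
        "os"
        "path/filepath"
        "strings"
    )

    // disableCNIConfs renames matching CNI config files to <name>.mk_disabled,
    // the same effect as the find/mv pipeline in the log.
    func disableCNIConfs(dir string, match func(string) bool) ([]string, error) {
        entries, err := os.ReadDir(dir)
        if err != nil {
            return nil, err
        }
        var disabled []string
        for _, e := range entries {
            name := e.Name()
            if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") || !match(name) {
                continue
            }
            p := filepath.Join(dir, name)
            if err := os.Rename(p, p+".mk_disabled"); err != nil {
                return nil, err
            }
            disabled = append(disabled, p)
        }
        return disabled, nil
    }

    func main() {
        got, err := disableCNIConfs("/etc/cni/net.d", func(n string) bool {
            return strings.Contains(n, "bridge") || strings.Contains(n, "podman")
        })
        fmt.Println(got, err)
    }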
	I0531 19:25:22.029138  129550 start.go:481] detecting cgroup driver to use...
	I0531 19:25:22.029170  129550 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0531 19:25:22.029224  129550 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0531 19:25:22.063746  129550 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0531 19:25:22.077262  129550 docker.go:193] disabling cri-docker service (if available) ...
	I0531 19:25:22.077328  129550 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0531 19:25:22.092925  129550 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0531 19:25:22.107062  129550 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	W0531 19:25:22.121663  129550 docker.go:203] Failed to disable socket "cri-docker.socket" (might be ok): sudo systemctl disable cri-docker.socket: Process exited with status 1
	stdout:
	
	stderr:
	Failed to disable unit: Unit file cri-docker.socket does not exist.
	I0531 19:25:22.121738  129550 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0531 19:25:22.274214  129550 docker.go:209] disabling docker service ...
	I0531 19:25:22.274295  129550 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0531 19:25:22.288275  129550 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0531 19:25:22.301715  129550 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0531 19:25:22.453868  129550 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0531 19:25:22.627286  129550 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0531 19:25:22.645196  129550 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0531 19:25:22.665280  129550 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0531 19:25:22.665394  129550 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0531 19:25:22.678955  129550 out.go:177] 
	W0531 19:25:22.680791  129550 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: update pause_image: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf": Process exited with status 2
	stdout:
	
	stderr:
	sed: can't read /etc/crio/crio.conf.d/02-crio.conf: No such file or directory
	
	X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: update pause_image: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf": Process exited with status 2
	stdout:
	
	stderr:
	sed: can't read /etc/crio/crio.conf.d/02-crio.conf: No such file or directory
	
	W0531 19:25:22.680815  129550 out.go:239] * 
	* 
	W0531 19:25:22.682006  129550 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0531 19:25:22.684699  129550 out.go:177] 

** /stderr **
version_upgrade_test.go:144: upgrade from v1.17.0 to HEAD failed: out/minikube-linux-arm64 start -p running-upgrade-862679 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: exit status 90
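The RUNTIME_ENABLE failure above is the crux of this test: the new binary hard-codes the drop-in path /etc/crio/crio.conf.d/02-crio.conf when rewriting pause_image, but the container provisioned by the v1.17.0 binary apparently predates that layout (sed reports the file missing), so the command exits with status 2. A defensive variant would probe for a config file that actually exists before editing; a sketch of that idea, not minikube's actual fix:

    package main

    import "fmt"

    // pauseImageCmd builds the sed command against the first CRI-O config
    // path that exists instead of hard-coding the drop-in location.
    func pauseImageCmd(exists func(string) bool, pauseImage string) (string, error) {
        for _, p := range []string{"/etc/crio/crio.conf.d/02-crio.conf", "/etc/crio/crio.conf"} {
            if exists(p) {
                return fmt.Sprintf(
                    `sudo sed -i 's|^.*pause_image = .*$|pause_image = "%s"|' %s`,
                    pauseImage, p), nil
            }
        }
        return "", fmt.Errorf("no CRI-O config found to set pause_image")
    }

    func main() {
        cmd, err := pauseImageCmd(func(p string) bool { return p == "/etc/crio/crio.conf" },
            "registry.k8s.io/pause:3.2")
        fmt.Println(cmd, err)
    }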
panic.go:522: *** TestRunningBinaryUpgrade FAILED at 2023-05-31 19:25:22.716863213 +0000 UTC m=+2479.349771973
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestRunningBinaryUpgrade]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect running-upgrade-862679
helpers_test.go:235: (dbg) docker inspect running-upgrade-862679:

-- stdout --
	[
	    {
	        "Id": "b07c24a9e68fbf6e057907a3714cbffe0803853ba0bb41f78374ce21cf41e083",
	        "Created": "2023-05-31T19:24:33.086768922Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 126043,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-05-31T19:24:33.599353611Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:9b79b8263a5873a7b57b8bb7698df1f71e90108b3174dea92dc6c576c0a9dbf9",
	        "ResolvConfPath": "/var/lib/docker/containers/b07c24a9e68fbf6e057907a3714cbffe0803853ba0bb41f78374ce21cf41e083/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/b07c24a9e68fbf6e057907a3714cbffe0803853ba0bb41f78374ce21cf41e083/hostname",
	        "HostsPath": "/var/lib/docker/containers/b07c24a9e68fbf6e057907a3714cbffe0803853ba0bb41f78374ce21cf41e083/hosts",
	        "LogPath": "/var/lib/docker/containers/b07c24a9e68fbf6e057907a3714cbffe0803853ba0bb41f78374ce21cf41e083/b07c24a9e68fbf6e057907a3714cbffe0803853ba0bb41f78374ce21cf41e083-json.log",
	        "Name": "/running-upgrade-862679",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "running-upgrade-862679:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "running-upgrade-862679",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/c7b5a13b439e72220b11bafca9b9d7b36a74ea24295bcf964b8b8bc92d17450b-init/diff:/var/lib/docker/overlay2/661cb7010ee6890477a936f9860959709113ceb7297cc0b4e079f7fea24d7400/diff:/var/lib/docker/overlay2/0a216ef6177cd5b5c31b27f5661fedc2ae1b9a86c41c350e6e52854ac48f0876/diff:/var/lib/docker/overlay2/19d4e4d5709bc282c4b06284fa98574e738cffd823fa0afc3fd719fcfbb9c5d7/diff:/var/lib/docker/overlay2/fad8bcaf109e0dca8cb00258338e1dd8f0fa259ef9de6c6d11d8d169fb92be50/diff:/var/lib/docker/overlay2/8da6277cfb32904e48fa11ad826de9c81915ed1710915dd1898b1dcfee67cc36/diff:/var/lib/docker/overlay2/affca7e8b47d22b0669cced051e3c35999460361b365f5f3768e141190ba1375/diff:/var/lib/docker/overlay2/9415d7cfbc1fda43810c648c08930b4eab742592db05c0a561f6a88809e497f9/diff:/var/lib/docker/overlay2/0a13e79faf7b3e8e10b7989db18e6b94785c20a0176cfe12655945b394dac273/diff:/var/lib/docker/overlay2/faa79511a27bbfe8b87eece0f4b0e69c86cd911e7de819847146bfc7e73ba41f/diff:/var/lib/docker/overlay2/f5e870
bd66fe72e811fb1f582a6e7fa685cedd148c10a512287c9d6e7f0c8d05/diff:/var/lib/docker/overlay2/4aba741b51d101211f26c54916c36b452a77a494e7a0313c688a4c5df75d0f04/diff:/var/lib/docker/overlay2/c5a0b81a3187181c644e6bba085a8b3e7de8299a1370883917dca8f13a58fc1c/diff:/var/lib/docker/overlay2/a7c3ea7c8e27b7fbc6a7f95116b8bdaf8acd4fd0b260cc609c6c952f7d4cf62c/diff:/var/lib/docker/overlay2/ea8bb4fec2c157faee709316d532a812bc7a4c1d1ed4a900a24ad87e066ee4a6/diff:/var/lib/docker/overlay2/9b2a335b80945b0a66b02f3475dad85306f8ac86a310e6c9a7d8bfe3074ce210/diff:/var/lib/docker/overlay2/99f90ab2d85392ed5fb26347099acca44af1ff22f0bcdcbb1313e4ed52cdc21c/diff:/var/lib/docker/overlay2/c45c4a7e8b7d84897b8bff816eeb3ce6ea520c1adc9164e1795b2b1b336614ec/diff:/var/lib/docker/overlay2/a20a0c3f9b3899b4c1ef409609edbc5a7cef4bd33ce62017e67b5cd79c45ddab/diff:/var/lib/docker/overlay2/8a80addbaf8e3c831daff4e2dc64cf0b6b44770b13afb1104e69d805f074a1fe/diff:/var/lib/docker/overlay2/be9ba128db96975890f8b70502a6fb42b0b04064e4c78830addde8e6dbea8f1f/diff:/var/lib/d
ocker/overlay2/2386298d292920c53bdc25f82c413aadee6254b7bc34c383d377c27ac73ef8a0/diff:/var/lib/docker/overlay2/055c4452bde77fb69a2f7b0959ceeab8144b6ad65f6b8acf4e9c35365ccffb94/diff:/var/lib/docker/overlay2/2e2f92bbfb222798b10e2f5133dea722123ae1cf4680bd40b3e60cff26e32585/diff:/var/lib/docker/overlay2/d502d2c9bbdeb20e9f9486a5fbbd7c2eabdc95a035916b3bd287b98c7bd69f55/diff:/var/lib/docker/overlay2/169eb6b3e0b3d08caaff41c6826ef2b01a15dce40575d9cad53df3ce7e005d9d/diff:/var/lib/docker/overlay2/a278b547d9c04efc6d9e70ec4212c55240f13980e3717d8c4cdbe83ca4f823e1/diff:/var/lib/docker/overlay2/10b83ea3ec0b1a5997195ec222d378335e504afb802ca902f02550ec96e46c82/diff:/var/lib/docker/overlay2/d2c17074adf15b499e928ffd6107eef93e44ca7dd50931bf50098bad0be51f02/diff:/var/lib/docker/overlay2/d4ace82964c48174a9132eb6411df798d9c4b5297026721a0a1bc3de83a529e4/diff:/var/lib/docker/overlay2/f411086f4d2e4c6346ac53dbfcc3bc26d7f482d937e2598b02aae2b3d252078f/diff:/var/lib/docker/overlay2/2ad0a6a9019f37a5654bb88c1dee88583cc57fff85d1d7d85b64801b129
6c19c/diff:/var/lib/docker/overlay2/7c7be78073210eca779c1d42c960c26cccff7ff162784f65a636568899740460/diff:/var/lib/docker/overlay2/6af71cc57e42bd76be6537c7fb3072810a9ec6f4c42f54ae44eca659102469d6/diff:/var/lib/docker/overlay2/f8dde1a05fa48dc1464fa3f97b68437f847a5e305869fb73d0356d2ed5f4a084/diff",
	                "MergedDir": "/var/lib/docker/overlay2/c7b5a13b439e72220b11bafca9b9d7b36a74ea24295bcf964b8b8bc92d17450b/merged",
	                "UpperDir": "/var/lib/docker/overlay2/c7b5a13b439e72220b11bafca9b9d7b36a74ea24295bcf964b8b8bc92d17450b/diff",
	                "WorkDir": "/var/lib/docker/overlay2/c7b5a13b439e72220b11bafca9b9d7b36a74ea24295bcf964b8b8bc92d17450b/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "running-upgrade-862679",
	                "Source": "/var/lib/docker/volumes/running-upgrade-862679/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "running-upgrade-862679",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "running-upgrade-862679",
	                "name.minikube.sigs.k8s.io": "running-upgrade-862679",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "1797bea64c360397b5952c5f883d32dff1a0c4cf9a32933a4af7f781bfacc2fb",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32969"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32968"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32967"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32966"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/1797bea64c36",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "running-upgrade-862679": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.70.216"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "b07c24a9e68f",
	                        "running-upgrade-862679"
	                    ],
	                    "NetworkID": "c9c2f65b904c7d4cff571cdcb9771b50c5c031216755eb328d28709723352162",
	                    "EndpointID": "2d7d3d2190866325d1256c740c6e050aeced811be27c7af73b0ee2411559632e",
	                    "Gateway": "192.168.70.1",
	                    "IPAddress": "192.168.70.216",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:46:d8",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p running-upgrade-862679 -n running-upgrade-862679
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p running-upgrade-862679 -n running-upgrade-862679: exit status 4 (587.354563ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E0531 19:25:23.232724  130194 status.go:415] kubeconfig endpoint: extract IP: "running-upgrade-862679" does not appear in /home/jenkins/minikube-integration/16569-2389/kubeconfig

** /stderr **
helpers_test.go:239: status error: exit status 4 (may be ok)
helpers_test.go:241: "running-upgrade-862679" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
helpers_test.go:175: Cleaning up "running-upgrade-862679" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p running-upgrade-862679
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p running-upgrade-862679: (2.796468472s)
--- FAIL: TestRunningBinaryUpgrade (68.67s)

TestMissingContainerUpgrade (90.67s)

=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade


=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:321: (dbg) Run:  /tmp/minikube-v1.9.1.2049153281.exe start -p missing-upgrade-915836 --memory=2200 --driver=docker  --container-runtime=crio
E0531 19:20:18.520946    7804 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16569-2389/.minikube/profiles/functional-747104/client.crt: no such file or directory
version_upgrade_test.go:321: (dbg) Non-zero exit: /tmp/minikube-v1.9.1.2049153281.exe start -p missing-upgrade-915836 --memory=2200 --driver=docker  --container-runtime=crio: exit status 70 (1m3.995983457s)

-- stdout --
	! [missing-upgrade-915836] minikube v1.9.1 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=16569
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/16569-2389/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/16569-2389/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Starting control plane node m01 in cluster missing-upgrade-915836
	* Pulling base image ...
	* Creating Kubernetes in docker container with (CPUs=2) (2 available), Memory=2200MB (7834MB available) ...
	* Deleting "missing-upgrade-915836" in docker ...
	* Creating Kubernetes in docker container with (CPUs=2) (2 available), Memory=2200MB (7834MB available) ...

-- /stdout --
** stderr ** 
	* minikube 1.30.1 is available! Download it: https://github.com/kubernetes/minikube/releases/tag/v1.30.1
	* To disable this notice, run: 'minikube config set WantUpdateNotification false'
	
	! StartHost failed, but will try again: creating host: create: creating: create kic node: check container "missing-upgrade-915836" running: temporary error created container "missing-upgrade-915836" is not running yet
	* 
	X Failed to start docker container. "minikube start -p missing-upgrade-915836" may fix it.: creating host: create: creating: create kic node: check container "missing-upgrade-915836" running: temporary error created container "missing-upgrade-915836" is not running yet
	* 
	* minikube is exiting due to an error. If the above message is not useful, open an issue:
	  - https://github.com/kubernetes/minikube/issues/new/choose

** /stderr **
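Note the retry behavior in this first attempt: the v1.9.1 binary treats "created container ... is not running yet" as transient, deletes the half-created node, and recreates it before finally giving up with exit 70. The loop is the standard retry-with-delay shape; a sketch (attempt counts and delays here are illustrative, not the old binary's actual values):

    package main

    import (
        "fmt"
        "time"
    )

    // retry re-runs fn up to attempts times, pausing between failures.
    func retry(attempts int, delay time.Duration, fn func() error) error {
        var err error
        for i := 0; i < attempts; i++ {
            if err = fn(); err == nil {
                return nil
            }
            fmt.Printf("! StartHost failed, but will try again: %v\n", err)
            time.Sleep(delay)
        }
        return err
    }

    func main() {
        calls := 0
        err := retry(3, 10*time.Millisecond, func() error {
            calls++
            if calls < 3 {
                return fmt.Errorf("temporary error: container not running yet")
            }
            return nil
        })
        fmt.Println(calls, err)
    }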
version_upgrade_test.go:321: (dbg) Run:  /tmp/minikube-v1.9.1.2049153281.exe start -p missing-upgrade-915836 --memory=2200 --driver=docker  --container-runtime=crio
version_upgrade_test.go:321: (dbg) Non-zero exit: /tmp/minikube-v1.9.1.2049153281.exe start -p missing-upgrade-915836 --memory=2200 --driver=docker  --container-runtime=crio: exit status 70 (8.449681406s)

-- stdout --
	* [missing-upgrade-915836] minikube v1.9.1 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=16569
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/16569-2389/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/16569-2389/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	* Starting control plane node m01 in cluster missing-upgrade-915836
	* Pulling base image ...
	* Restarting existing docker container for "missing-upgrade-915836" ...
	* Restarting existing docker container for "missing-upgrade-915836" ...

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: provision: get ssh host-port: get host-bind port 22 for "missing-upgrade-915836", output 
	template parsing error: template: :1:4: executing "" at <index (index .NetworkSettings.Ports "22/tcp") 0>: error calling index: index of untyped nil
	: exit status 1
	* 
	X Failed to start docker container. "minikube start -p missing-upgrade-915836" may fix it.: provision: get ssh host-port: get host-bind port 22 for "missing-upgrade-915836", output 
	template parsing error: template: :1:4: executing "" at <index (index .NetworkSettings.Ports "22/tcp") 0>: error calling index: index of untyped nil
	: exit status 1
	* 
	* minikube is exiting due to an error. If the above message is not useful, open an issue:
	  - https://github.com/kubernetes/minikube/issues/new/choose

** /stderr **
version_upgrade_test.go:321: (dbg) Run:  /tmp/minikube-v1.9.1.2049153281.exe start -p missing-upgrade-915836 --memory=2200 --driver=docker  --container-runtime=crio
version_upgrade_test.go:321: (dbg) Non-zero exit: /tmp/minikube-v1.9.1.2049153281.exe start -p missing-upgrade-915836 --memory=2200 --driver=docker  --container-runtime=crio: exit status 70 (10.315038234s)

-- stdout --
	* [missing-upgrade-915836] minikube v1.9.1 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=16569
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/16569-2389/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/16569-2389/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	* Starting control plane node m01 in cluster missing-upgrade-915836
	* Pulling base image ...
	* Restarting existing docker container for "missing-upgrade-915836" ...
	* Restarting existing docker container for "missing-upgrade-915836" ...

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: provision: get ssh host-port: get host-bind port 22 for "missing-upgrade-915836", output 
	template parsing error: template: :1:4: executing "" at <index (index .NetworkSettings.Ports "22/tcp") 0>: error calling index: index of untyped nil
	: exit status 1
	* 
	X Failed to start docker container. "minikube start -p missing-upgrade-915836" may fix it.: provision: get ssh host-port: get host-bind port 22 for "missing-upgrade-915836", output 
	template parsing error: template: :1:4: executing "" at <index (index .NetworkSettings.Ports "22/tcp") 0>: error calling index: index of untyped nil
	: exit status 1
	* 
	* minikube is exiting due to an error. If the above message is not useful, open an issue:
	  - https://github.com/kubernetes/minikube/issues/new/choose

** /stderr **
version_upgrade_test.go:327: release start failed: exit status 70
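The repeated "index of untyped nil" template error explains why both restart attempts failed: the port-lookup template (index (index .NetworkSettings.Ports "22/tcp") 0) assumes the container publishes port 22, but for a container that never reached the running state Docker reports an empty Ports map (see "Ports": {} in the inspect output below), so the inner index yields nil and indexing into it aborts template execution. Decoding the inspect JSON instead turns that panic into a checkable error; a sketch:

    package main

    import (
        "encoding/json"
        "fmt"
    )

    type inspectData struct {
        NetworkSettings struct {
            Ports map[string][]struct{ HostIP, HostPort string }
        }
    }

    // hostPort extracts the published host port for a container port,
    // reporting a plain error when the container has no bindings.
    func hostPort(inspectJSON []byte, port string) (string, error) {
        var c inspectData
        if err := json.Unmarshal(inspectJSON, &c); err != nil {
            return "", err
        }
        bindings := c.NetworkSettings.Ports[port]
        if len(bindings) == 0 {
            return "", fmt.Errorf("no host binding for %s (container not running?)", port)
        }
        return bindings[0].HostPort, nil
    }

    func main() {
        _, err := hostPort([]byte(`{"NetworkSettings":{"Ports":{}}}`), "22/tcp")
        fmt.Println(err)
    }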
panic.go:522: *** TestMissingContainerUpgrade FAILED at 2023-05-31 19:21:24.82129913 +0000 UTC m=+2241.454207906
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMissingContainerUpgrade]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect missing-upgrade-915836
helpers_test.go:235: (dbg) docker inspect missing-upgrade-915836:

-- stdout --
	[
	    {
	        "Id": "6f571b0367aded34932487769f17b227d4305acc3e29fa1ae449e166c605c971",
	        "Created": "2023-05-31T19:20:47.943757976Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "exited",
	            "Running": false,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 0,
	            "ExitCode": 1,
	            "Error": "",
	            "StartedAt": "2023-05-31T19:21:24.53496658Z",
	            "FinishedAt": "2023-05-31T19:21:24.53905334Z"
	        },
	        "Image": "sha256:11589cdc9ef4b67a64cc243dd3cf013e81ad02bbed105fc37dc07aa272044680",
	        "ResolvConfPath": "/var/lib/docker/containers/6f571b0367aded34932487769f17b227d4305acc3e29fa1ae449e166c605c971/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/6f571b0367aded34932487769f17b227d4305acc3e29fa1ae449e166c605c971/hostname",
	        "HostsPath": "/var/lib/docker/containers/6f571b0367aded34932487769f17b227d4305acc3e29fa1ae449e166c605c971/hosts",
	        "LogPath": "/var/lib/docker/containers/6f571b0367aded34932487769f17b227d4305acc3e29fa1ae449e166c605c971/6f571b0367aded34932487769f17b227d4305acc3e29fa1ae449e166c605c971-json.log",
	        "Name": "/missing-upgrade-915836",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "missing-upgrade-915836:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "default",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/6132a8e82da94a53331e95d0f9c4d2de594a8179a567a6a74bed9767fa5f0905-init/diff:/var/lib/docker/overlay2/5cb4fe33df16fb450694f7ca3938e6cc8d21fc4e9da162b3b89c07cfb6084fa0/diff:/var/lib/docker/overlay2/8434edd8e44352f7ed10e57c92d6bcf801f708f783ec0d678d792eebe92a38b9/diff:/var/lib/docker/overlay2/179eaa9b28a3c8b27c72821e946dfd3c5b95adecf7331ba34f92831a34c5f2b0/diff:/var/lib/docker/overlay2/eac21afcc05a199d71dd473e49d785e53e9afd5111315c326ad8718b85f5ee16/diff:/var/lib/docker/overlay2/10b874bfc0f6d7dcb93713d28bdf930185fd901df1d075dc0b74fb5bb1c388fc/diff:/var/lib/docker/overlay2/483598ca1d9d02c8ddec0e6657684f98f7b775321a6ac9e71b5090a2b31985c8/diff:/var/lib/docker/overlay2/958e8a773d57772f90119fe5ef64d6c7dfe0d2422cb36c86bef3a5a62772d34f/diff:/var/lib/docker/overlay2/4ed867cccff61f6fa6170eac5658f4671560d6150335af1ea6304618b75fcd21/diff:/var/lib/docker/overlay2/5f6fe41378612869290e4d6091d8b7d8939ec1ac4a12fffa283eb5078bc6cebf/diff:/var/lib/docker/overlay2/93e435
fb61b1508bd6ed030551eca49430995a7119258d05820416159195bf55/diff:/var/lib/docker/overlay2/f86699a0e9c989c62d4729fb3e77ac7abd25154ba52b6802c4a354747a85d0a9/diff:/var/lib/docker/overlay2/86d33fcabc9f76a4d0d100e6f68f99ca9053eec5041a501f8c390f28d09e5e32/diff:/var/lib/docker/overlay2/893dc5eff67cd636b0c0816c42a274d87fc04c576d7c236bdee26fc9d708c2a2/diff:/var/lib/docker/overlay2/b508dd013b954adbd580a0b4789372feaf540166faee0a5398811bcce97d5f73/diff:/var/lib/docker/overlay2/92fb4cb9f10631f7b6ec47edd61b300174ad22f50b5b35a2b32e690098970769/diff:/var/lib/docker/overlay2/b266197644b5167557b1f5dd7355d895a64b3cbd396ea24f052aa49f4c616630/diff:/var/lib/docker/overlay2/f4e4a78460998ca8c9414961db1b2d3b35111a8cec0c48be276f977e70a291f1/diff:/var/lib/docker/overlay2/3652570fb89f8c1ea19e2a861f2037e0a4043b655d8b04ed4eb925e1be97b4df/diff:/var/lib/docker/overlay2/61065cdc9ea57fc614f85551c0eb7bc1859b1c9f402a4a956e55bb144218b1d6/diff:/var/lib/docker/overlay2/f5f2884140723a403edb899108e5c4bb300871d368033cc9a39e235e2cf580a5/diff:/var/lib/d
ocker/overlay2/3ca1063a63184857def71dbaaf00ec6218480cd0a045d2d5750fd3740b016079/diff",
	                "MergedDir": "/var/lib/docker/overlay2/6132a8e82da94a53331e95d0f9c4d2de594a8179a567a6a74bed9767fa5f0905/merged",
	                "UpperDir": "/var/lib/docker/overlay2/6132a8e82da94a53331e95d0f9c4d2de594a8179a567a6a74bed9767fa5f0905/diff",
	                "WorkDir": "/var/lib/docker/overlay2/6132a8e82da94a53331e95d0f9c4d2de594a8179a567a6a74bed9767fa5f0905/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "missing-upgrade-915836",
	                "Source": "/var/lib/docker/volumes/missing-upgrade-915836/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "missing-upgrade-915836",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	                "container=docker"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.8@sha256:2f3380ebf1bb0c75b0b47160fd4e61b7b8fef0f1f32f9def108d3eada50a7a81",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "missing-upgrade-915836",
	                "name.minikube.sigs.k8s.io": "missing-upgrade-915836",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "cd35bdf4da71d7b14736fa7d08dbc251c09bc83cd6b4483627357ff64c34b824",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {},
	            "SandboxKey": "/var/run/docker/netns/cd35bdf4da71",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "bridge": {
	                    "IPAMConfig": null,
	                    "Links": null,
	                    "Aliases": null,
	                    "NetworkID": "ad835ee81e55db9983ad2d64e44200c895822a69f32d8b5ff05bc994810ca057",
	                    "EndpointID": "",
	                    "Gateway": "",
	                    "IPAddress": "",
	                    "IPPrefixLen": 0,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p missing-upgrade-915836 -n missing-upgrade-915836
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p missing-upgrade-915836 -n missing-upgrade-915836: exit status 7 (97.715119ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "missing-upgrade-915836" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "missing-upgrade-915836" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p missing-upgrade-915836
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p missing-upgrade-915836: (4.069440456s)
--- FAIL: TestMissingContainerUpgrade (90.67s)

TestStoppedBinaryUpgrade/Upgrade (164.44s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:195: (dbg) Run:  /tmp/minikube-v1.17.0.3190973104.exe start -p stopped-upgrade-577066 --memory=2200 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:195: (dbg) Done: /tmp/minikube-v1.17.0.3190973104.exe start -p stopped-upgrade-577066 --memory=2200 --vm-driver=docker  --container-runtime=crio: (2m4.833741967s)
version_upgrade_test.go:204: (dbg) Run:  /tmp/minikube-v1.17.0.3190973104.exe -p stopped-upgrade-577066 stop
version_upgrade_test.go:204: (dbg) Done: /tmp/minikube-v1.17.0.3190973104.exe -p stopped-upgrade-577066 stop: (20.266249296s)
version_upgrade_test.go:210: (dbg) Run:  out/minikube-linux-arm64 start -p stopped-upgrade-577066 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:210: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p stopped-upgrade-577066 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: exit status 90 (19.332792408s)

-- stdout --
	* [stopped-upgrade-577066] minikube v1.30.1 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=16569
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/16569-2389/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/16569-2389/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.27.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.27.2
	* Using the docker driver based on existing profile
	* Starting control plane node stopped-upgrade-577066 in cluster stopped-upgrade-577066
	* Pulling base image ...
	* Restarting existing docker container for "stopped-upgrade-577066" ...
	
	

-- /stdout --
** stderr ** 
	I0531 19:23:55.309390  123367 out.go:296] Setting OutFile to fd 1 ...
	I0531 19:23:55.309541  123367 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0531 19:23:55.309549  123367 out.go:309] Setting ErrFile to fd 2...
	I0531 19:23:55.309554  123367 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0531 19:23:55.309715  123367 root.go:336] Updating PATH: /home/jenkins/minikube-integration/16569-2389/.minikube/bin
	I0531 19:23:55.310525  123367 out.go:303] Setting JSON to false
	I0531 19:23:55.312572  123367 start.go:127] hostinfo: {"hostname":"ip-172-31-21-244","uptime":3981,"bootTime":1685557055,"procs":318,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1037-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I0531 19:23:55.312668  123367 start.go:137] virtualization:  
	I0531 19:23:55.317134  123367 out.go:177] * [stopped-upgrade-577066] minikube v1.30.1 on Ubuntu 20.04 (arm64)
	I0531 19:23:55.320151  123367 preload.go:306] deleting older generation preload /home/jenkins/minikube-integration/16569-2389/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v8-v1.20.2-cri-o-overlay-arm64.tar.lz4
	I0531 19:23:55.322913  123367 out.go:177]   - MINIKUBE_LOCATION=16569
	I0531 19:23:55.324780  123367 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0531 19:23:55.322499  123367 notify.go:220] Checking for updates...
	I0531 19:23:55.326465  123367 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/16569-2389/kubeconfig
	I0531 19:23:55.328301  123367 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/16569-2389/.minikube
	I0531 19:23:55.330086  123367 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0531 19:23:55.331712  123367 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0531 19:23:55.333902  123367 config.go:182] Loaded profile config "stopped-upgrade-577066": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.20.2
	I0531 19:23:55.336720  123367 out.go:177] * Kubernetes 1.27.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.27.2
	I0531 19:23:55.338997  123367 driver.go:375] Setting default libvirt URI to qemu:///system
	I0531 19:23:55.364799  123367 docker.go:121] docker version: linux-24.0.2:Docker Engine - Community
	I0531 19:23:55.364899  123367 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0531 19:23:55.454072  123367 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:6 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:39 OomKillDisable:true NGoroutines:45 SystemTime:2023-05-31 19:23:55.44096017 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1037-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215166976 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:24.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dce8eb055cbb6872793272b4f20ed16117344f8 Expected:3dce8eb055cbb6872793272b4f20ed16117344f8} RuncCommit:{ID:v1.1.7-0-g860f061 Expected:v1.1.7-0-g860f061} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.5] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.18.1]] Warnings:<nil>}}
	I0531 19:23:55.454188  123367 docker.go:294] overlay module found
	I0531 19:23:55.456100  123367 out.go:177] * Using the docker driver based on existing profile
	I0531 19:23:55.458217  123367 start.go:297] selected driver: docker
	I0531 19:23:55.458860  123367 start.go:875] validating driver "docker" against &{Name:stopped-upgrade-577066 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.2 ClusterName:stopped-upgrade-577066 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.59.43 Port:8443 KubernetesVersion:v1.20.2 ContainerRuntime: ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0531 19:23:55.458988  123367 start.go:886] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0531 19:23:55.459610  123367 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0531 19:23:55.518811  123367 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:6 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:39 OomKillDisable:true NGoroutines:45 SystemTime:2023-05-31 19:23:55.509490172 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1037-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215166976 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:24.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dce8eb055cbb6872793272b4f20ed16117344f8 Expected:3dce8eb055cbb6872793272b4f20ed16117344f8} RuncCommit:{ID:v1.1.7-0-g860f061 Expected:v1.1.7-0-g860f061} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.5] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.18.1]] Warnings:<nil>}}
	I0531 19:23:55.519131  123367 cni.go:84] Creating CNI manager for ""
	I0531 19:23:55.519151  123367 cni.go:142] "docker" driver + "crio" runtime found, recommending kindnet
	I0531 19:23:55.519745  123367 start_flags.go:319] config:
	{Name:stopped-upgrade-577066 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.2 ClusterName:stopped-upgrade-577066 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.59.43 Port:8443 KubernetesVersion:v1.20.2 ContainerRuntime: ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0531 19:23:55.522570  123367 out.go:177] * Starting control plane node stopped-upgrade-577066 in cluster stopped-upgrade-577066
	I0531 19:23:55.524587  123367 cache.go:122] Beginning downloading kic base image for docker with crio
	I0531 19:23:55.526269  123367 out.go:177] * Pulling base image ...
	I0531 19:23:55.528566  123367 preload.go:132] Checking if preload exists for k8s version v1.20.2 and runtime crio
	I0531 19:23:55.528599  123367 image.go:79] Checking for gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e in local docker daemon
	I0531 19:23:55.550270  123367 cache.go:150] Downloading gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e to local cache
	I0531 19:23:55.550444  123367 image.go:63] Checking for gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e in local cache directory
	I0531 19:23:55.550979  123367 image.go:118] Writing gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e to local cache
	W0531 19:23:55.603335  123367 preload.go:115] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.2/preloaded-images-k8s-v18-v1.20.2-cri-o-overlay-arm64.tar.lz4 status code: 404
	I0531 19:23:55.603492  123367 profile.go:148] Saving config to /home/jenkins/minikube-integration/16569-2389/.minikube/profiles/stopped-upgrade-577066/config.json ...
	I0531 19:23:55.603619  123367 cache.go:107] acquiring lock: {Name:mk89f9e6ee8c851ef9ed99cf6ebe7adc39020f0b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0531 19:23:55.603715  123367 cache.go:115] /home/jenkins/minikube-integration/16569-2389/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0531 19:23:55.603731  123367 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/16569-2389/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 117.907µs
	I0531 19:23:55.603741  123367 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/16569-2389/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0531 19:23:55.603752  123367 cache.go:107] acquiring lock: {Name:mk8fccbba98eb1702cef51c7850f17b093bea715 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0531 19:23:55.603785  123367 cache.go:107] acquiring lock: {Name:mk2292ee62d0e7f8d023be18d02f0546c1cd97a6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0531 19:23:55.603820  123367 cache.go:107] acquiring lock: {Name:mkf3e47cfb9a25cbf9383b1ed7627945128b152f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0531 19:23:55.603849  123367 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.2
	I0531 19:23:55.603876  123367 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0531 19:23:55.606289  123367 cache.go:107] acquiring lock: {Name:mkcb72e74476706329269bb5e468bfc2d57d9b5d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0531 19:23:55.606322  123367 cache.go:107] acquiring lock: {Name:mk74a1c08aa1553d1a9cd0c0856a9f1935620cd7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0531 19:23:55.606538  123367 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.2
	I0531 19:23:55.606561  123367 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0531 19:23:55.606685  123367 cache.go:107] acquiring lock: {Name:mk8f4513ce2aea84ca1e6ce414d1dfbd1c3a832e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0531 19:23:55.606883  123367 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.2
	I0531 19:23:55.607078  123367 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.2
	I0531 19:23:55.606289  123367 cache.go:107] acquiring lock: {Name:mk02f998900df685bacb6d0f85eccc031a990f1a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0531 19:23:55.609066  123367 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0531 19:23:55.610339  123367 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.2: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.2
	I0531 19:23:55.610674  123367 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0531 19:23:55.611002  123367 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.2: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.2
	I0531 19:23:55.610844  123367 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0531 19:23:55.610882  123367 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.2: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.2
	I0531 19:23:55.610911  123367 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0531 19:23:55.610941  123367 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.2: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.2
	W0531 19:23:56.082035  123367 image.go:265] image registry.k8s.io/etcd:3.4.13-0 arch mismatch: want arm64 got amd64. fixing
	I0531 19:23:56.082152  123367 cache.go:162] opening:  /home/jenkins/minikube-integration/16569-2389/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.4.13-0
	I0531 19:23:56.082807  123367 cache.go:162] opening:  /home/jenkins/minikube-integration/16569-2389/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.20.2
	I0531 19:23:56.090832  123367 cache.go:162] opening:  /home/jenkins/minikube-integration/16569-2389/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.20.2
	I0531 19:23:56.100661  123367 cache.go:162] opening:  /home/jenkins/minikube-integration/16569-2389/.minikube/cache/images/arm64/registry.k8s.io/pause_3.2
	W0531 19:23:56.114871  123367 image.go:265] image registry.k8s.io/kube-proxy:v1.20.2 arch mismatch: want arm64 got amd64. fixing
	I0531 19:23:56.114981  123367 cache.go:162] opening:  /home/jenkins/minikube-integration/16569-2389/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.20.2
	I0531 19:23:56.120210  123367 cache.go:162] opening:  /home/jenkins/minikube-integration/16569-2389/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.20.2
	W0531 19:23:56.148610  123367 image.go:265] image registry.k8s.io/coredns:1.7.0 arch mismatch: want arm64 got amd64. fixing
	I0531 19:23:56.148698  123367 cache.go:162] opening:  /home/jenkins/minikube-integration/16569-2389/.minikube/cache/images/arm64/registry.k8s.io/coredns_1.7.0
	I0531 19:23:56.191634  123367 cache.go:157] /home/jenkins/minikube-integration/16569-2389/.minikube/cache/images/arm64/registry.k8s.io/pause_3.2 exists
	I0531 19:23:56.191661  123367 cache.go:96] cache image "registry.k8s.io/pause:3.2" -> "/home/jenkins/minikube-integration/16569-2389/.minikube/cache/images/arm64/registry.k8s.io/pause_3.2" took 587.841156ms
	I0531 19:23:56.191674  123367 cache.go:80] save to tar file registry.k8s.io/pause:3.2 -> /home/jenkins/minikube-integration/16569-2389/.minikube/cache/images/arm64/registry.k8s.io/pause_3.2 succeeded
	    > gcr.io/k8s-minikube/kicbase...:  17.69 KiB / 287.99 MiB [>] 0.01% ? p/s ?
	I0531 19:23:56.492439  123367 cache.go:157] /home/jenkins/minikube-integration/16569-2389/.minikube/cache/images/arm64/registry.k8s.io/coredns_1.7.0 exists
	I0531 19:23:56.492468  123367 cache.go:96] cache image "registry.k8s.io/coredns:1.7.0" -> "/home/jenkins/minikube-integration/16569-2389/.minikube/cache/images/arm64/registry.k8s.io/coredns_1.7.0" took 886.14863ms
	I0531 19:23:56.492481  123367 cache.go:80] save to tar file registry.k8s.io/coredns:1.7.0 -> /home/jenkins/minikube-integration/16569-2389/.minikube/cache/images/arm64/registry.k8s.io/coredns_1.7.0 succeeded
	I0531 19:23:56.541142  123367 cache.go:157] /home/jenkins/minikube-integration/16569-2389/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.20.2 exists
	I0531 19:23:56.541307  123367 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.20.2" -> "/home/jenkins/minikube-integration/16569-2389/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.20.2" took 934.619385ms
	I0531 19:23:56.541339  123367 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.20.2 -> /home/jenkins/minikube-integration/16569-2389/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.20.2 succeeded
	    > gcr.io/k8s-minikube/kicbase...:  2.19 MiB / 287.99 MiB [>_] 0.76% ? p/s ?
	I0531 19:23:56.807955  123367 cache.go:157] /home/jenkins/minikube-integration/16569-2389/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.20.2 exists
	I0531 19:23:56.808021  123367 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.20.2" -> "/home/jenkins/minikube-integration/16569-2389/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.20.2" took 1.201740189s
	I0531 19:23:56.808049  123367 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.20.2 -> /home/jenkins/minikube-integration/16569-2389/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.20.2 succeeded
	    > gcr.io/k8s-minikube/kicbase...:  16.02 MiB / 287.99 MiB  5.56% 26.77 MiB
	I0531 19:23:56.992637  123367 cache.go:157] /home/jenkins/minikube-integration/16569-2389/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.20.2 exists
	I0531 19:23:56.992778  123367 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.20.2" -> "/home/jenkins/minikube-integration/16569-2389/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.20.2" took 1.389019757s
	I0531 19:23:56.992900  123367 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.20.2 -> /home/jenkins/minikube-integration/16569-2389/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.20.2 succeeded
	    > gcr.io/k8s-minikube/kicbase...:  25.93 MiB / 287.99 MiB  9.00% 26.11 MiB
	I0531 19:23:57.705195  123367 cache.go:157] /home/jenkins/minikube-integration/16569-2389/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.20.2 exists
	I0531 19:23:57.705222  123367 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.20.2" -> "/home/jenkins/minikube-integration/16569-2389/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.20.2" took 2.101441085s
	I0531 19:23:57.705236  123367 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.20.2 -> /home/jenkins/minikube-integration/16569-2389/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.20.2 succeeded
	    > gcr.io/k8s-minikube/kicbase...:  43.24 MiB / 287.99 MiB  15.01% 24.43 MiB
	I0531 19:23:58.660356  123367 cache.go:157] /home/jenkins/minikube-integration/16569-2389/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.4.13-0 exists
	I0531 19:23:58.660381  123367 cache.go:96] cache image "registry.k8s.io/etcd:3.4.13-0" -> "/home/jenkins/minikube-integration/16569-2389/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.4.13-0" took 3.054099058s
	I0531 19:23:58.660394  123367 cache.go:80] save to tar file registry.k8s.io/etcd:3.4.13-0 -> /home/jenkins/minikube-integration/16569-2389/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.4.13-0 succeeded
	I0531 19:23:58.660409  123367 cache.go:87] Successfully saved all images to host disk.
	    > gcr.io/k8s-minikube/kicbase...:  287.99 MiB / 287.99 MiB  100.00% 29.44 MiB
	I0531 19:24:06.056672  123367 cache.go:153] successfully saved gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e as a tarball
	I0531 19:24:06.056682  123367 cache.go:163] Loading gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e from local cache
	I0531 19:24:08.221394  123367 cache.go:165] successfully loaded and using gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e from cached tarball
	I0531 19:24:08.221429  123367 cache.go:195] Successfully downloaded all kic artifacts
	I0531 19:24:08.221491  123367 start.go:364] acquiring machines lock for stopped-upgrade-577066: {Name:mk5ebb8600eefc19eceb0ed8a13b2029a0d2df49 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0531 19:24:08.221567  123367 start.go:368] acquired machines lock for "stopped-upgrade-577066" in 52.57µs
	I0531 19:24:08.221589  123367 start.go:96] Skipping create...Using existing machine configuration
	I0531 19:24:08.221611  123367 fix.go:55] fixHost starting: 
	I0531 19:24:08.221886  123367 cli_runner.go:164] Run: docker container inspect stopped-upgrade-577066 --format={{.State.Status}}
	I0531 19:24:08.241906  123367 fix.go:103] recreateIfNeeded on stopped-upgrade-577066: state=Stopped err=<nil>
	W0531 19:24:08.241945  123367 fix.go:129] unexpected machine state, will restart: <nil>
	I0531 19:24:08.245313  123367 out.go:177] * Restarting existing docker container for "stopped-upgrade-577066" ...
	I0531 19:24:08.247498  123367 cli_runner.go:164] Run: docker start stopped-upgrade-577066
	I0531 19:24:08.557726  123367 cli_runner.go:164] Run: docker container inspect stopped-upgrade-577066 --format={{.State.Status}}
	I0531 19:24:08.582312  123367 kic.go:426] container "stopped-upgrade-577066" state is running.
	I0531 19:24:08.582819  123367 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" stopped-upgrade-577066
	I0531 19:24:08.605736  123367 profile.go:148] Saving config to /home/jenkins/minikube-integration/16569-2389/.minikube/profiles/stopped-upgrade-577066/config.json ...
	I0531 19:24:08.605958  123367 machine.go:88] provisioning docker machine ...
	I0531 19:24:08.605973  123367 ubuntu.go:169] provisioning hostname "stopped-upgrade-577066"
	I0531 19:24:08.606026  123367 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-577066
	I0531 19:24:08.628283  123367 main.go:141] libmachine: Using SSH client type: native
	I0531 19:24:08.628736  123367 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x39eaa0] 0x3a1430 <nil>  [] 0s} 127.0.0.1 32965 <nil> <nil>}
	I0531 19:24:08.628748  123367 main.go:141] libmachine: About to run SSH command:
	sudo hostname stopped-upgrade-577066 && echo "stopped-upgrade-577066" | sudo tee /etc/hostname
	I0531 19:24:08.629415  123367 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:43686->127.0.0.1:32965: read: connection reset by peer
	I0531 19:24:11.782961  123367 main.go:141] libmachine: SSH cmd err, output: <nil>: stopped-upgrade-577066
	
	I0531 19:24:11.783048  123367 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-577066
	I0531 19:24:11.802130  123367 main.go:141] libmachine: Using SSH client type: native
	I0531 19:24:11.802574  123367 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x39eaa0] 0x3a1430 <nil>  [] 0s} 127.0.0.1 32965 <nil> <nil>}
	I0531 19:24:11.802599  123367 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sstopped-upgrade-577066' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 stopped-upgrade-577066/g' /etc/hosts;
				else 
					echo '127.0.1.1 stopped-upgrade-577066' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0531 19:24:11.944057  123367 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0531 19:24:11.944081  123367 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/16569-2389/.minikube CaCertPath:/home/jenkins/minikube-integration/16569-2389/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/16569-2389/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/16569-2389/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/16569-2389/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/16569-2389/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/16569-2389/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/16569-2389/.minikube}
	I0531 19:24:11.944103  123367 ubuntu.go:177] setting up certificates
	I0531 19:24:11.944112  123367 provision.go:83] configureAuth start
	I0531 19:24:11.944179  123367 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" stopped-upgrade-577066
	I0531 19:24:11.961697  123367 provision.go:138] copyHostCerts
	I0531 19:24:11.961760  123367 exec_runner.go:144] found /home/jenkins/minikube-integration/16569-2389/.minikube/key.pem, removing ...
	I0531 19:24:11.961769  123367 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16569-2389/.minikube/key.pem
	I0531 19:24:11.961841  123367 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16569-2389/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/16569-2389/.minikube/key.pem (1679 bytes)
	I0531 19:24:11.961938  123367 exec_runner.go:144] found /home/jenkins/minikube-integration/16569-2389/.minikube/ca.pem, removing ...
	I0531 19:24:11.961943  123367 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16569-2389/.minikube/ca.pem
	I0531 19:24:11.961964  123367 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16569-2389/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/16569-2389/.minikube/ca.pem (1078 bytes)
	I0531 19:24:11.962017  123367 exec_runner.go:144] found /home/jenkins/minikube-integration/16569-2389/.minikube/cert.pem, removing ...
	I0531 19:24:11.962021  123367 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16569-2389/.minikube/cert.pem
	I0531 19:24:11.962040  123367 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16569-2389/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/16569-2389/.minikube/cert.pem (1123 bytes)
	I0531 19:24:11.962080  123367 provision.go:112] generating server cert: /home/jenkins/minikube-integration/16569-2389/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/16569-2389/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/16569-2389/.minikube/certs/ca-key.pem org=jenkins.stopped-upgrade-577066 san=[192.168.59.43 127.0.0.1 localhost 127.0.0.1 minikube stopped-upgrade-577066]
	I0531 19:24:12.749722  123367 provision.go:172] copyRemoteCerts
	I0531 19:24:12.749790  123367 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0531 19:24:12.749842  123367 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-577066
	I0531 19:24:12.774229  123367 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32965 SSHKeyPath:/home/jenkins/minikube-integration/16569-2389/.minikube/machines/stopped-upgrade-577066/id_rsa Username:docker}
	I0531 19:24:12.871775  123367 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16569-2389/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0531 19:24:12.897437  123367 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16569-2389/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0531 19:24:12.923639  123367 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16569-2389/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0531 19:24:12.951326  123367 provision.go:86] duration metric: configureAuth took 1.007197504s
	I0531 19:24:12.951393  123367 ubuntu.go:193] setting minikube options for container-runtime
	I0531 19:24:12.951609  123367 config.go:182] Loaded profile config "stopped-upgrade-577066": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.20.2
	I0531 19:24:12.951775  123367 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-577066
	I0531 19:24:12.978468  123367 main.go:141] libmachine: Using SSH client type: native
	I0531 19:24:12.978966  123367 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x39eaa0] 0x3a1430 <nil>  [] 0s} 127.0.0.1 32965 <nil> <nil>}
	I0531 19:24:12.978983  123367 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0531 19:24:13.420987  123367 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0531 19:24:13.421017  123367 machine.go:91] provisioned docker machine in 4.815050473s
	I0531 19:24:13.421028  123367 start.go:300] post-start starting for "stopped-upgrade-577066" (driver="docker")
	I0531 19:24:13.421035  123367 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0531 19:24:13.421107  123367 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0531 19:24:13.421151  123367 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-577066
	I0531 19:24:13.455479  123367 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32965 SSHKeyPath:/home/jenkins/minikube-integration/16569-2389/.minikube/machines/stopped-upgrade-577066/id_rsa Username:docker}
	I0531 19:24:13.561866  123367 ssh_runner.go:195] Run: cat /etc/os-release
	I0531 19:24:13.566501  123367 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0531 19:24:13.566528  123367 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0531 19:24:13.566540  123367 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0531 19:24:13.566546  123367 info.go:137] Remote host: Ubuntu 20.04.1 LTS
	I0531 19:24:13.566556  123367 filesync.go:126] Scanning /home/jenkins/minikube-integration/16569-2389/.minikube/addons for local assets ...
	I0531 19:24:13.566627  123367 filesync.go:126] Scanning /home/jenkins/minikube-integration/16569-2389/.minikube/files for local assets ...
	I0531 19:24:13.566723  123367 filesync.go:149] local asset: /home/jenkins/minikube-integration/16569-2389/.minikube/files/etc/ssl/certs/78042.pem -> 78042.pem in /etc/ssl/certs
	I0531 19:24:13.566914  123367 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0531 19:24:13.578746  123367 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16569-2389/.minikube/files/etc/ssl/certs/78042.pem --> /etc/ssl/certs/78042.pem (1708 bytes)
	I0531 19:24:13.604948  123367 start.go:303] post-start completed in 183.905656ms
	I0531 19:24:13.605030  123367 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0531 19:24:13.605074  123367 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-577066
	I0531 19:24:13.624763  123367 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32965 SSHKeyPath:/home/jenkins/minikube-integration/16569-2389/.minikube/machines/stopped-upgrade-577066/id_rsa Username:docker}
	I0531 19:24:13.721104  123367 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0531 19:24:13.726888  123367 fix.go:57] fixHost completed within 5.505279477s
	I0531 19:24:13.726912  123367 start.go:83] releasing machines lock for "stopped-upgrade-577066", held for 5.505334114s
	I0531 19:24:13.726981  123367 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" stopped-upgrade-577066
	I0531 19:24:13.749915  123367 ssh_runner.go:195] Run: cat /version.json
	I0531 19:24:13.749980  123367 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-577066
	I0531 19:24:13.750241  123367 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0531 19:24:13.750300  123367 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-577066
	I0531 19:24:13.772838  123367 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32965 SSHKeyPath:/home/jenkins/minikube-integration/16569-2389/.minikube/machines/stopped-upgrade-577066/id_rsa Username:docker}
	I0531 19:24:13.782823  123367 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32965 SSHKeyPath:/home/jenkins/minikube-integration/16569-2389/.minikube/machines/stopped-upgrade-577066/id_rsa Username:docker}
	W0531 19:24:13.871425  123367 start.go:414] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0531 19:24:13.871498  123367 ssh_runner.go:195] Run: systemctl --version
	I0531 19:24:13.947929  123367 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0531 19:24:14.044115  123367 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0531 19:24:14.050246  123367 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0531 19:24:14.070864  123367 cni.go:220] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0531 19:24:14.070943  123367 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0531 19:24:14.097642  123367 cni.go:261] disabled [/etc/cni/net.d/100-crio-bridge.conf, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0531 19:24:14.097710  123367 start.go:481] detecting cgroup driver to use...
	I0531 19:24:14.097759  123367 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0531 19:24:14.097869  123367 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0531 19:24:14.126300  123367 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0531 19:24:14.139706  123367 docker.go:193] disabling cri-docker service (if available) ...
	I0531 19:24:14.139789  123367 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0531 19:24:14.153638  123367 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0531 19:24:14.166510  123367 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	W0531 19:24:14.179938  123367 docker.go:203] Failed to disable socket "cri-docker.socket" (might be ok): sudo systemctl disable cri-docker.socket: Process exited with status 1
	stdout:
	
	stderr:
	Failed to disable unit: Unit file cri-docker.socket does not exist.
	I0531 19:24:14.180001  123367 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0531 19:24:14.288038  123367 docker.go:209] disabling docker service ...
	I0531 19:24:14.288143  123367 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0531 19:24:14.301851  123367 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0531 19:24:14.314431  123367 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0531 19:24:14.420561  123367 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0531 19:24:14.533424  123367 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0531 19:24:14.547599  123367 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0531 19:24:14.565682  123367 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0531 19:24:14.565749  123367 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0531 19:24:14.579830  123367 out.go:177] 
	W0531 19:24:14.581761  123367 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: update pause_image: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf": Process exited with status 2
	stdout:
	
	stderr:
	sed: can't read /etc/crio/crio.conf.d/02-crio.conf: No such file or directory
	
	W0531 19:24:14.581787  123367 out.go:239] * 
	W0531 19:24:14.583282  123367 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0531 19:24:14.585110  123367 out.go:177] 

** /stderr **
version_upgrade_test.go:212: upgrade from v1.17.0 to HEAD failed: out/minikube-linux-arm64 start -p stopped-upgrade-577066 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: exit status 90
--- FAIL: TestStoppedBinaryUpgrade/Upgrade (164.44s)
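
The root cause is visible in the stderr above: the container created by the v1.17.0 binary (an older kicbase image) has no /etc/crio/crio.conf.d/02-crio.conf, so the unconditional sed that rewrites pause_image exits with status 2 and start aborts with RUNTIME_ENABLE. A guarded variant would fall back to the main /etc/crio/crio.conf when the drop-in is absent. The Go sketch below is illustrative only: a hypothetical updatePauseImage helper run over plain sh rather than minikube's ssh_runner, and not the project's actual fix.

package main

import (
	"fmt"
	"os/exec"
)

// updatePauseImage rewrites the CRI-O pause_image setting, but only after
// picking a config file that actually exists: the 02-crio.conf drop-in on
// newer images, falling back to /etc/crio/crio.conf on older ones.
func updatePauseImage(pauseImage string) error {
	script := fmt.Sprintf(`conf=/etc/crio/crio.conf.d/02-crio.conf
if [ ! -f "$conf" ]; then conf=/etc/crio/crio.conf; fi
sudo sed -i 's|^.*pause_image = .*$|pause_image = "%s"|' "$conf"`, pauseImage)
	if out, err := exec.Command("sh", "-c", script).CombinedOutput(); err != nil {
		return fmt.Errorf("update pause_image: %v: %s", err, out)
	}
	return nil
}

func main() {
	if err := updatePauseImage("registry.k8s.io/pause:3.2"); err != nil {
		fmt.Println(err)
	}
}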

TestPause/serial/SecondStartNoReconfiguration (53.63s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-arm64 start -p pause-142925 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-arm64 start -p pause-142925 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (46.031503436s)
pause_test.go:100: expected the second start log output to include "The running cluster does not require reconfiguration" but got: 
-- stdout --
	* [pause-142925] minikube v1.30.1 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=16569
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/16569-2389/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/16569-2389/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	* Starting control plane node pause-142925 in cluster pause-142925
	* Pulling base image ...
	* Updating the running docker "pause-142925" container ...
	* Preparing Kubernetes v1.27.2 on CRI-O 1.24.5 ...
	* Configuring CNI (Container Networking Interface) ...
	* Enabled addons: 
	* Verifying Kubernetes components...
	* Done! kubectl is now configured to use "pause-142925" cluster and "default" namespace by default

-- /stdout --
** stderr ** 
	I0531 19:26:42.829010  135811 out.go:296] Setting OutFile to fd 1 ...
	I0531 19:26:42.829240  135811 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0531 19:26:42.829264  135811 out.go:309] Setting ErrFile to fd 2...
	I0531 19:26:42.829285  135811 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0531 19:26:42.829496  135811 root.go:336] Updating PATH: /home/jenkins/minikube-integration/16569-2389/.minikube/bin
	I0531 19:26:42.829902  135811 out.go:303] Setting JSON to false
	I0531 19:26:42.831125  135811 start.go:127] hostinfo: {"hostname":"ip-172-31-21-244","uptime":4148,"bootTime":1685557055,"procs":343,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1037-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I0531 19:26:42.831228  135811 start.go:137] virtualization:  
	I0531 19:26:42.836052  135811 out.go:177] * [pause-142925] minikube v1.30.1 on Ubuntu 20.04 (arm64)
	I0531 19:26:42.838290  135811 out.go:177]   - MINIKUBE_LOCATION=16569
	I0531 19:26:42.838379  135811 notify.go:220] Checking for updates...
	I0531 19:26:42.841349  135811 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0531 19:26:42.845091  135811 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/16569-2389/kubeconfig
	I0531 19:26:42.847298  135811 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/16569-2389/.minikube
	I0531 19:26:42.849097  135811 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0531 19:26:42.850995  135811 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0531 19:26:42.853646  135811 config.go:182] Loaded profile config "pause-142925": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.27.2
	I0531 19:26:42.854323  135811 driver.go:375] Setting default libvirt URI to qemu:///system
	I0531 19:26:42.883252  135811 docker.go:121] docker version: linux-24.0.2:Docker Engine - Community
	I0531 19:26:42.883363  135811 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0531 19:26:43.030533  135811 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:6 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:43 OomKillDisable:true NGoroutines:55 SystemTime:2023-05-31 19:26:43.013080507 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1037-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215166976 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:24.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dce8eb055cbb6872793272b4f20ed16117344f8 Expected:3dce8eb055cbb6872793272b4f20ed16117344f8} RuncCommit:{ID:v1.1.7-0-g860f061 Expected:v1.1.7-0-g860f061} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.5] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.18.1]] Warnings:<nil>}}
	I0531 19:26:43.030637  135811 docker.go:294] overlay module found
	I0531 19:26:43.033987  135811 out.go:177] * Using the docker driver based on existing profile
	I0531 19:26:43.036448  135811 start.go:297] selected driver: docker
	I0531 19:26:43.036464  135811 start.go:875] validating driver "docker" against &{Name:pause-142925 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685034446-16582@sha256:aa728b22374c829d1e5b0a5d64d51d3e0ae0f2b191381d957516fdff68f357c8 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:pause-142925 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0531 19:26:43.036634  135811 start.go:886] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0531 19:26:43.036737  135811 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0531 19:26:43.174194  135811 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:6 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:43 OomKillDisable:true NGoroutines:55 SystemTime:2023-05-31 19:26:43.163279969 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1037-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215166976 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:24.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dce8eb055cbb6872793272b4f20ed16117344f8 Expected:3dce8eb055cbb6872793272b4f20ed16117344f8} RuncCommit:{ID:v1.1.7-0-g860f061 Expected:v1.1.7-0-g860f061} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.5] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.18.1]] Warnings:<nil>}}
	I0531 19:26:43.174712  135811 cni.go:84] Creating CNI manager for ""
	I0531 19:26:43.174766  135811 cni.go:142] "docker" driver + "crio" runtime found, recommending kindnet
	I0531 19:26:43.174780  135811 start_flags.go:319] config:
	{Name:pause-142925 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685034446-16582@sha256:aa728b22374c829d1e5b0a5d64d51d3e0ae0f2b191381d957516fdff68f357c8 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:pause-142925 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISo
cket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false volumesna
pshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0531 19:26:43.177173  135811 out.go:177] * Starting control plane node pause-142925 in cluster pause-142925
	I0531 19:26:43.179598  135811 cache.go:122] Beginning downloading kic base image for docker with crio
	I0531 19:26:43.181346  135811 out.go:177] * Pulling base image ...
	I0531 19:26:43.183395  135811 preload.go:132] Checking if preload exists for k8s version v1.27.2 and runtime crio
	I0531 19:26:43.183450  135811 preload.go:148] Found local preload: /home/jenkins/minikube-integration/16569-2389/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.2-cri-o-overlay-arm64.tar.lz4
	I0531 19:26:43.183461  135811 cache.go:57] Caching tarball of preloaded images
	I0531 19:26:43.183540  135811 preload.go:174] Found /home/jenkins/minikube-integration/16569-2389/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.2-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I0531 19:26:43.183562  135811 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.2 on crio
	I0531 19:26:43.183733  135811 profile.go:148] Saving config to /home/jenkins/minikube-integration/16569-2389/.minikube/profiles/pause-142925/config.json ...
	I0531 19:26:43.183993  135811 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685034446-16582@sha256:aa728b22374c829d1e5b0a5d64d51d3e0ae0f2b191381d957516fdff68f357c8 in local docker daemon
	I0531 19:26:43.206833  135811 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685034446-16582@sha256:aa728b22374c829d1e5b0a5d64d51d3e0ae0f2b191381d957516fdff68f357c8 in local docker daemon, skipping pull
	I0531 19:26:43.206855  135811 cache.go:145] gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685034446-16582@sha256:aa728b22374c829d1e5b0a5d64d51d3e0ae0f2b191381d957516fdff68f357c8 exists in daemon, skipping load
	I0531 19:26:43.206872  135811 cache.go:195] Successfully downloaded all kic artifacts
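	Both the preload tarball and the digest-pinned kicbase image were found locally, so minikube skipped the downloads. The image check it just logged is roughly equivalent to this one-liner (a sketch; it only assumes a local docker daemon):

		# exits 0 and prints the image ID iff the pinned base image is already in the daemon
		docker image inspect \
		  gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685034446-16582@sha256:aa728b22374c829d1e5b0a5d64d51d3e0ae0f2b191381d957516fdff68f357c8 \
		  --format '{{.Id}}'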
	I0531 19:26:43.206927  135811 start.go:364] acquiring machines lock for pause-142925: {Name:mk04c4614445160173a1e1cfca869c713691e79d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0531 19:26:43.206994  135811 start.go:368] acquired machines lock for "pause-142925" in 39.966µs
	I0531 19:26:43.207012  135811 start.go:96] Skipping create...Using existing machine configuration
	I0531 19:26:43.207018  135811 fix.go:55] fixHost starting: 
	I0531 19:26:43.207309  135811 cli_runner.go:164] Run: docker container inspect pause-142925 --format={{.State.Status}}
	I0531 19:26:43.230259  135811 fix.go:103] recreateIfNeeded on pause-142925: state=Running err=<nil>
	W0531 19:26:43.230291  135811 fix.go:129] unexpected machine state, will restart: <nil>
	I0531 19:26:43.232770  135811 out.go:177] * Updating the running docker "pause-142925" container ...
	I0531 19:26:43.234575  135811 machine.go:88] provisioning docker machine ...
	I0531 19:26:43.234631  135811 ubuntu.go:169] provisioning hostname "pause-142925"
	I0531 19:26:43.234716  135811 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-142925
	I0531 19:26:43.258694  135811 main.go:141] libmachine: Using SSH client type: native
	I0531 19:26:43.259246  135811 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x39eaa0] 0x3a1430 <nil>  [] 0s} 127.0.0.1 32974 <nil> <nil>}
	I0531 19:26:43.259263  135811 main.go:141] libmachine: About to run SSH command:
	sudo hostname pause-142925 && echo "pause-142925" | sudo tee /etc/hostname
	I0531 19:26:43.422217  135811 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-142925
	
	I0531 19:26:43.422310  135811 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-142925
	I0531 19:26:43.447412  135811 main.go:141] libmachine: Using SSH client type: native
	I0531 19:26:43.447940  135811 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x39eaa0] 0x3a1430 <nil>  [] 0s} 127.0.0.1 32974 <nil> <nil>}
	I0531 19:26:43.447979  135811 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-142925' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-142925/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-142925' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0531 19:26:43.589863  135811 main.go:141] libmachine: SSH cmd err, output: <nil>: 
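	The two SSH commands above set the container's hostname and make /etc/hosts agree with it; the grep/sed/tee block is idempotent, so a repeated start does not stack duplicate entries. A spot-check from the host (a sketch, assuming the pause-142925 container is still running and reachable via docker exec):

		# both lines should show pause-142925
		docker exec pause-142925 hostname
		docker exec pause-142925 grep 127.0.1.1 /etc/hosts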
	I0531 19:26:43.589897  135811 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/16569-2389/.minikube CaCertPath:/home/jenkins/minikube-integration/16569-2389/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/16569-2389/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/16569-2389/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/16569-2389/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/16569-2389/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/16569-2389/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/16569-2389/.minikube}
	I0531 19:26:43.589923  135811 ubuntu.go:177] setting up certificates
	I0531 19:26:43.589938  135811 provision.go:83] configureAuth start
	I0531 19:26:43.590017  135811 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-142925
	I0531 19:26:43.627713  135811 provision.go:138] copyHostCerts
	I0531 19:26:43.627848  135811 exec_runner.go:144] found /home/jenkins/minikube-integration/16569-2389/.minikube/ca.pem, removing ...
	I0531 19:26:43.627858  135811 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16569-2389/.minikube/ca.pem
	I0531 19:26:43.627962  135811 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16569-2389/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/16569-2389/.minikube/ca.pem (1078 bytes)
	I0531 19:26:43.628106  135811 exec_runner.go:144] found /home/jenkins/minikube-integration/16569-2389/.minikube/cert.pem, removing ...
	I0531 19:26:43.628115  135811 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16569-2389/.minikube/cert.pem
	I0531 19:26:43.628147  135811 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16569-2389/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/16569-2389/.minikube/cert.pem (1123 bytes)
	I0531 19:26:43.628249  135811 exec_runner.go:144] found /home/jenkins/minikube-integration/16569-2389/.minikube/key.pem, removing ...
	I0531 19:26:43.628254  135811 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16569-2389/.minikube/key.pem
	I0531 19:26:43.628288  135811 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16569-2389/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/16569-2389/.minikube/key.pem (1679 bytes)
	I0531 19:26:43.628351  135811 provision.go:112] generating server cert: /home/jenkins/minikube-integration/16569-2389/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/16569-2389/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/16569-2389/.minikube/certs/ca-key.pem org=jenkins.pause-142925 san=[192.168.76.2 127.0.0.1 localhost 127.0.0.1 minikube pause-142925]
	I0531 19:26:44.693937  135811 provision.go:172] copyRemoteCerts
	I0531 19:26:44.694005  135811 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0531 19:26:44.694059  135811 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-142925
	I0531 19:26:44.713413  135811 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32974 SSHKeyPath:/home/jenkins/minikube-integration/16569-2389/.minikube/machines/pause-142925/id_rsa Username:docker}
	I0531 19:26:44.810159  135811 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16569-2389/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0531 19:26:44.842556  135811 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16569-2389/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0531 19:26:44.883562  135811 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16569-2389/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0531 19:26:44.912364  135811 provision.go:86] duration metric: configureAuth took 1.322413117s
	I0531 19:26:44.912437  135811 ubuntu.go:193] setting minikube options for container-runtime
	I0531 19:26:44.912700  135811 config.go:182] Loaded profile config "pause-142925": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.27.2
	I0531 19:26:44.912852  135811 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-142925
	I0531 19:26:44.940312  135811 main.go:141] libmachine: Using SSH client type: native
	I0531 19:26:44.940797  135811 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x39eaa0] 0x3a1430 <nil>  [] 0s} 127.0.0.1 32974 <nil> <nil>}
	I0531 19:26:44.940826  135811 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0531 19:26:50.509964  135811 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0531 19:26:50.509989  135811 machine.go:91] provisioned docker machine in 7.275397517s
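	Provisioning finished by dropping a one-line environment file that flags the service CIDR (10.96.0.0/12) as an insecure registry for CRI-O, then restarting the runtime; that restart accounts for most of the 7.3s. Verifying the drop-in by hand (a sketch, again assuming docker exec access):

		# the file written by the printf | tee pipeline above
		docker exec pause-142925 cat /etc/sysconfig/crio.minikube
		# prints 'active' once the systemctl restart settles
		docker exec pause-142925 systemctl is-active crio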
	I0531 19:26:50.510000  135811 start.go:300] post-start starting for "pause-142925" (driver="docker")
	I0531 19:26:50.510007  135811 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0531 19:26:50.510117  135811 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0531 19:26:50.510232  135811 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-142925
	I0531 19:26:50.547201  135811 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32974 SSHKeyPath:/home/jenkins/minikube-integration/16569-2389/.minikube/machines/pause-142925/id_rsa Username:docker}
	I0531 19:26:50.653237  135811 ssh_runner.go:195] Run: cat /etc/os-release
	I0531 19:26:50.659966  135811 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0531 19:26:50.659998  135811 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0531 19:26:50.660011  135811 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0531 19:26:50.660018  135811 info.go:137] Remote host: Ubuntu 22.04.2 LTS
	I0531 19:26:50.660027  135811 filesync.go:126] Scanning /home/jenkins/minikube-integration/16569-2389/.minikube/addons for local assets ...
	I0531 19:26:50.660080  135811 filesync.go:126] Scanning /home/jenkins/minikube-integration/16569-2389/.minikube/files for local assets ...
	I0531 19:26:50.660168  135811 filesync.go:149] local asset: /home/jenkins/minikube-integration/16569-2389/.minikube/files/etc/ssl/certs/78042.pem -> 78042.pem in /etc/ssl/certs
	I0531 19:26:50.660270  135811 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0531 19:26:50.670813  135811 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16569-2389/.minikube/files/etc/ssl/certs/78042.pem --> /etc/ssl/certs/78042.pem (1708 bytes)
	I0531 19:26:50.704995  135811 start.go:303] post-start completed in 194.980224ms
	I0531 19:26:50.705115  135811 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0531 19:26:50.705183  135811 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-142925
	I0531 19:26:50.742910  135811 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32974 SSHKeyPath:/home/jenkins/minikube-integration/16569-2389/.minikube/machines/pause-142925/id_rsa Username:docker}
	I0531 19:26:50.847832  135811 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0531 19:26:50.861725  135811 fix.go:57] fixHost completed within 7.654699921s
	I0531 19:26:50.861747  135811 start.go:83] releasing machines lock for "pause-142925", held for 7.654744926s
	I0531 19:26:50.861826  135811 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-142925
	I0531 19:26:50.888211  135811 ssh_runner.go:195] Run: cat /version.json
	I0531 19:26:50.888237  135811 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0531 19:26:50.888266  135811 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-142925
	I0531 19:26:50.888294  135811 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-142925
	I0531 19:26:50.946949  135811 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32974 SSHKeyPath:/home/jenkins/minikube-integration/16569-2389/.minikube/machines/pause-142925/id_rsa Username:docker}
	I0531 19:26:50.953887  135811 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32974 SSHKeyPath:/home/jenkins/minikube-integration/16569-2389/.minikube/machines/pause-142925/id_rsa Username:docker}
	I0531 19:26:51.047795  135811 ssh_runner.go:195] Run: systemctl --version
	I0531 19:26:51.222696  135811 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0531 19:26:51.408654  135811 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0531 19:26:51.414797  135811 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0531 19:26:51.425894  135811 cni.go:220] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0531 19:26:51.425990  135811 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0531 19:26:51.437463  135811 cni.go:258] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
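	Note that matching CNI configs are renamed, not deleted: the .mk_disabled suffix takes them out of /etc/cni/net.d ordering so the kindnet config recommended earlier wins, and a later start can undo it. Listing the directory shows what is live versus parked (a sketch):

		# files still ending in .conf/.conflist are the active CNI configs
		docker exec pause-142925 ls -l /etc/cni/net.d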
	I0531 19:26:51.437487  135811 start.go:481] detecting cgroup driver to use...
	I0531 19:26:51.437521  135811 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0531 19:26:51.437573  135811 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0531 19:26:51.452561  135811 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0531 19:26:51.468030  135811 docker.go:193] disabling cri-docker service (if available) ...
	I0531 19:26:51.468094  135811 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0531 19:26:51.483947  135811 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0531 19:26:51.498727  135811 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0531 19:26:51.625410  135811 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0531 19:26:51.768025  135811 docker.go:209] disabling docker service ...
	I0531 19:26:51.768095  135811 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0531 19:26:51.785338  135811 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0531 19:26:51.802710  135811 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0531 19:26:51.979197  135811 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0531 19:26:52.338214  135811 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0531 19:26:52.393202  135811 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0531 19:26:52.451670  135811 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0531 19:26:52.451789  135811 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0531 19:26:52.525232  135811 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0531 19:26:52.525351  135811 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0531 19:26:52.557596  135811 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0531 19:26:52.590180  135811 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0531 19:26:52.614693  135811 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0531 19:26:52.657916  135811 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0531 19:26:52.690481  135811 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0531 19:26:52.724800  135811 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0531 19:26:53.247647  135811 ssh_runner.go:195] Run: sudo systemctl restart crio
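	The sed edits ahead of this restart pinned pause_image and cgroup_manager in /etc/crio/crio.conf.d/02-crio.conf and put conmon into the pod cgroup. One way to confirm the values CRI-O actually loaded (a sketch; crio config needs root and prints the merged configuration, so the grep targets are best-effort):

		docker exec pause-142925 sudo crio config 2>/dev/null | grep -E 'pause_image|cgroup_manager|conmon_cgroup'

	Expected output, given the host detection above: pause_image = "registry.k8s.io/pause:3.9", cgroup_manager = "cgroupfs", conmon_cgroup = "pod".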
	I0531 19:26:53.615790  135811 start.go:528] Will wait 60s for socket path /var/run/crio/crio.sock
	I0531 19:26:53.615858  135811 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0531 19:26:53.620535  135811 start.go:549] Will wait 60s for crictl version
	I0531 19:26:53.620599  135811 ssh_runner.go:195] Run: which crictl
	I0531 19:26:53.625086  135811 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0531 19:26:53.674487  135811 start.go:565] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.5
	RuntimeApiVersion:  v1
	I0531 19:26:53.674572  135811 ssh_runner.go:195] Run: crio --version
	I0531 19:26:53.727084  135811 ssh_runner.go:195] Run: crio --version
	I0531 19:26:53.779547  135811 out.go:177] * Preparing Kubernetes v1.27.2 on CRI-O 1.24.5 ...
	I0531 19:26:53.781075  135811 cli_runner.go:164] Run: docker network inspect pause-142925 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0531 19:26:53.799674  135811 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I0531 19:26:53.804428  135811 preload.go:132] Checking if preload exists for k8s version v1.27.2 and runtime crio
	I0531 19:26:53.804506  135811 ssh_runner.go:195] Run: sudo crictl images --output json
	I0531 19:26:53.855702  135811 crio.go:496] all images are preloaded for cri-o runtime.
	I0531 19:26:53.855725  135811 crio.go:415] Images already preloaded, skipping extraction
	I0531 19:26:53.855789  135811 ssh_runner.go:195] Run: sudo crictl images --output json
	I0531 19:26:53.914805  135811 crio.go:496] all images are preloaded for cri-o runtime.
	I0531 19:26:53.914827  135811 cache_images.go:84] Images are preloaded, skipping loading
	I0531 19:26:53.914898  135811 ssh_runner.go:195] Run: crio config
	I0531 19:26:54.011010  135811 cni.go:84] Creating CNI manager for ""
	I0531 19:26:54.011048  135811 cni.go:142] "docker" driver + "crio" runtime found, recommending kindnet
	I0531 19:26:54.011062  135811 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0531 19:26:54.011084  135811 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.27.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-142925 NodeName:pause-142925 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernete
s/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0531 19:26:54.011298  135811 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-142925"
	  kubeletExtraArgs:
	    node-ip: 192.168.76.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.27.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
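	The manifest above is four YAML documents in one file: InitConfiguration and ClusterConfiguration for kubeadm itself, plus the KubeletConfiguration and KubeProxyConfiguration that kubeadm forwards to the components. A config like this can be sanity-checked without touching a node (a sketch; the path mirrors the kubeadm.yaml.new scp'd further down, and the kubeadm binary should match kubernetesVersion):

		# runs preflight and renders the static-pod manifests into a temp dir; changes nothing
		sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml.new --dry-run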
	
	I0531 19:26:54.011435  135811 kubeadm.go:971] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.27.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --enforce-node-allocatable= --hostname-override=pause-142925 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.27.2 ClusterName:pause-142925 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
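	The empty ExecStart= line in the drop-in is standard systemd practice: it clears the packaged command so the second ExecStart= can substitute minikube's kubelet flags. Once the scp calls below land the files, the merged unit can be read back with (a sketch, assuming systemd inside the container):

		# shows kubelet.service plus every .d/ drop-in, including 10-kubeadm.conf
		docker exec pause-142925 systemctl cat kubelet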
	I0531 19:26:54.011522  135811 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.27.2
	I0531 19:26:54.026703  135811 binaries.go:44] Found k8s binaries, skipping transfer
	I0531 19:26:54.026803  135811 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0531 19:26:54.039762  135811 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (422 bytes)
	I0531 19:26:54.062188  135811 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0531 19:26:54.087168  135811 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2093 bytes)
	I0531 19:26:54.117408  135811 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I0531 19:26:54.122359  135811 certs.go:56] Setting up /home/jenkins/minikube-integration/16569-2389/.minikube/profiles/pause-142925 for IP: 192.168.76.2
	I0531 19:26:54.122396  135811 certs.go:190] acquiring lock for shared ca certs: {Name:mk0147accf8b8da231d39646bdc89fced67451cd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0531 19:26:54.122596  135811 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/16569-2389/.minikube/ca.key
	I0531 19:26:54.122657  135811 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/16569-2389/.minikube/proxy-client-ca.key
	I0531 19:26:54.122774  135811 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/16569-2389/.minikube/profiles/pause-142925/client.key
	I0531 19:26:54.122849  135811 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/16569-2389/.minikube/profiles/pause-142925/apiserver.key.31bdca25
	I0531 19:26:54.122899  135811 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/16569-2389/.minikube/profiles/pause-142925/proxy-client.key
	I0531 19:26:54.123021  135811 certs.go:437] found cert: /home/jenkins/minikube-integration/16569-2389/.minikube/certs/home/jenkins/minikube-integration/16569-2389/.minikube/certs/7804.pem (1338 bytes)
	W0531 19:26:54.123058  135811 certs.go:433] ignoring /home/jenkins/minikube-integration/16569-2389/.minikube/certs/home/jenkins/minikube-integration/16569-2389/.minikube/certs/7804_empty.pem, impossibly tiny 0 bytes
	I0531 19:26:54.123071  135811 certs.go:437] found cert: /home/jenkins/minikube-integration/16569-2389/.minikube/certs/home/jenkins/minikube-integration/16569-2389/.minikube/certs/ca-key.pem (1675 bytes)
	I0531 19:26:54.123101  135811 certs.go:437] found cert: /home/jenkins/minikube-integration/16569-2389/.minikube/certs/home/jenkins/minikube-integration/16569-2389/.minikube/certs/ca.pem (1078 bytes)
	I0531 19:26:54.123127  135811 certs.go:437] found cert: /home/jenkins/minikube-integration/16569-2389/.minikube/certs/home/jenkins/minikube-integration/16569-2389/.minikube/certs/cert.pem (1123 bytes)
	I0531 19:26:54.123155  135811 certs.go:437] found cert: /home/jenkins/minikube-integration/16569-2389/.minikube/certs/home/jenkins/minikube-integration/16569-2389/.minikube/certs/key.pem (1679 bytes)
	I0531 19:26:54.123209  135811 certs.go:437] found cert: /home/jenkins/minikube-integration/16569-2389/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/16569-2389/.minikube/files/etc/ssl/certs/78042.pem (1708 bytes)
	I0531 19:26:54.123914  135811 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16569-2389/.minikube/profiles/pause-142925/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0531 19:26:54.212668  135811 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16569-2389/.minikube/profiles/pause-142925/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0531 19:26:54.297619  135811 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16569-2389/.minikube/profiles/pause-142925/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0531 19:26:54.383726  135811 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16569-2389/.minikube/profiles/pause-142925/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0531 19:26:54.457542  135811 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16569-2389/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0531 19:26:54.494119  135811 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16569-2389/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0531 19:26:54.549089  135811 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16569-2389/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0531 19:26:54.588681  135811 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16569-2389/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0531 19:26:54.624662  135811 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16569-2389/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0531 19:26:54.660258  135811 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16569-2389/.minikube/certs/7804.pem --> /usr/share/ca-certificates/7804.pem (1338 bytes)
	I0531 19:26:54.695096  135811 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16569-2389/.minikube/files/etc/ssl/certs/78042.pem --> /usr/share/ca-certificates/78042.pem (1708 bytes)
	I0531 19:26:54.743345  135811 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0531 19:26:54.776184  135811 ssh_runner.go:195] Run: openssl version
	I0531 19:26:54.789154  135811 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0531 19:26:54.806601  135811 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0531 19:26:54.812086  135811 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 May 31 18:45 /usr/share/ca-certificates/minikubeCA.pem
	I0531 19:26:54.812210  135811 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0531 19:26:54.825794  135811 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0531 19:26:54.843380  135811 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7804.pem && ln -fs /usr/share/ca-certificates/7804.pem /etc/ssl/certs/7804.pem"
	I0531 19:26:54.855855  135811 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7804.pem
	I0531 19:26:54.861618  135811 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 May 31 18:52 /usr/share/ca-certificates/7804.pem
	I0531 19:26:54.861740  135811 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7804.pem
	I0531 19:26:54.875546  135811 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/7804.pem /etc/ssl/certs/51391683.0"
	I0531 19:26:54.895738  135811 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/78042.pem && ln -fs /usr/share/ca-certificates/78042.pem /etc/ssl/certs/78042.pem"
	I0531 19:26:54.912751  135811 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/78042.pem
	I0531 19:26:54.918261  135811 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 May 31 18:52 /usr/share/ca-certificates/78042.pem
	I0531 19:26:54.918349  135811 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/78042.pem
	I0531 19:26:54.929930  135811 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/78042.pem /etc/ssl/certs/3ec20f2e.0"
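	The openssl -hash / ln -fs pairs above do by hand what c_rehash automates: OpenSSL's lookup-by-directory code resolves a CA by its subject-name hash, so each PEM gets a <hash>.0 symlink (b5213941, 51391683 and 3ec20f2e here). The pattern in isolation (a sketch with a hypothetical certificate path):

		# compute the subject hash and create the link OpenSSL expects
		h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
		sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${h}.0"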
	I0531 19:26:54.942442  135811 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0531 19:26:54.947136  135811 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0531 19:26:54.960592  135811 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0531 19:26:54.975152  135811 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0531 19:26:54.984069  135811 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0531 19:26:54.993394  135811 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0531 19:26:55.002453  135811 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
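	-checkend takes an offset in seconds, so each of the six runs above asks "does this certificate expire within 86400 s (24 h)?"; openssl exits 0 if the cert outlives the window and non-zero otherwise, which is the signal used to decide whether certs need regenerating. Standalone (a sketch with a hypothetical path):

		openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400 \
		  && echo "valid for at least 24h" \
		  || echo "expires within 24h (or unreadable)"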
	I0531 19:26:55.012966  135811 kubeadm.go:404] StartCluster: {Name:pause-142925 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685034446-16582@sha256:aa728b22374c829d1e5b0a5d64d51d3e0ae0f2b191381d957516fdff68f357c8 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:pause-142925 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServ
erIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-p
rovisioner:false storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0531 19:26:55.013163  135811 cri.go:53] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0531 19:26:55.013253  135811 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0531 19:26:55.083680  135811 cri.go:88] found id: "53d9487d7e1b556075195bfb0286d52f52853cdea8e81c656c257bcf3e6f01d3"
	I0531 19:26:55.083705  135811 cri.go:88] found id: "34090e829e11e53c79999781fb71ff7bf5c9378b9ed69c8ce4045b6264d718e3"
	I0531 19:26:55.083715  135811 cri.go:88] found id: "ebefff5a4557aa51ef59f346ac5bd484e9300fed7b41c4dfd142d34f42af2a35"
	I0531 19:26:55.083724  135811 cri.go:88] found id: "aecfb452ce5e36e1f9750b8b696e43d37fe95157384a205ac6924bc948c6c429"
	I0531 19:26:55.083761  135811 cri.go:88] found id: "478eaf89680befbce2bd6c495bd58bce3c8fc95be30d948c1b7a09dbeb666faf"
	I0531 19:26:55.083773  135811 cri.go:88] found id: "7ee1670fa5d41728bdabc13111b625c2ed395d205a748387291c212bf8c167df"
	I0531 19:26:55.083781  135811 cri.go:88] found id: "ccff2afb9f0d7350e36e7ec256289cccb2c5a8cfe9135602ab761d803ef779e5"
	I0531 19:26:55.083805  135811 cri.go:88] found id: ""
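	The seven IDs come from filtering for kube-system pods while collecting containers to pause; the empty trailing entry just terminates the list. The same query works from a shell, with flags that exist in stock crictl (a sketch):

		# all (-a) container IDs (--quiet) whose pod lives in the kube-system namespace
		sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system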
	I0531 19:26:55.083869  135811 ssh_runner.go:195] Run: sudo runc list -f json
	I0531 19:26:55.135526  135811 cri.go:115] JSON = [{"ociVersion":"1.0.2-dev","id":"34090e829e11e53c79999781fb71ff7bf5c9378b9ed69c8ce4045b6264d718e3","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/34090e829e11e53c79999781fb71ff7bf5c9378b9ed69c8ce4045b6264d718e3/userdata","rootfs":"/var/lib/containers/storage/overlay/c709267ea3c1f594faf0b1403b7f27f583ce48c0b8faf6bdfbba417253f576da/merged","created":"2023-05-31T19:26:52.625355087Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"e201ca0d","io.kubernetes.container.name":"kube-scheduler","io.kubernetes.container.restartCount":"1","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"e201ca0d\",\"io.kubernetes.container.restartCount\":\"1\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminatio
nMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"34090e829e11e53c79999781fb71ff7bf5c9378b9ed69c8ce4045b6264d718e3","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2023-05-31T19:26:52.351998414Z","io.kubernetes.cri-o.Image":"305d7ed1dae2877c3a80d434c5fb9f1aac1aa3d2431c36130a3fcd1970e93840","io.kubernetes.cri-o.ImageName":"registry.k8s.io/kube-scheduler:v1.27.2","io.kubernetes.cri-o.ImageRef":"305d7ed1dae2877c3a80d434c5fb9f1aac1aa3d2431c36130a3fcd1970e93840","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-scheduler\",\"io.kubernetes.pod.name\":\"kube-scheduler-pause-142925\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"0377833077f2dcde49733432b51808bd\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-scheduler-pause-142925_0377833077f2dcde49733432b51808bd/kube-scheduler/1.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-scheduler\",\"attempt\
":1}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/c709267ea3c1f594faf0b1403b7f27f583ce48c0b8faf6bdfbba417253f576da/merged","io.kubernetes.cri-o.Name":"k8s_kube-scheduler_kube-scheduler-pause-142925_kube-system_0377833077f2dcde49733432b51808bd_1","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/4c08723eca1fdb3c2b4c892c5597cbff05c1d4cfc6f8dd3dca121cc5de7cdd39/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"4c08723eca1fdb3c2b4c892c5597cbff05c1d4cfc6f8dd3dca121cc5de7cdd39","io.kubernetes.cri-o.SandboxName":"k8s_kube-scheduler-pause-142925_kube-system_0377833077f2dcde49733432b51808bd_0","io.kubernetes.cri-o.SeccompProfilePath":"runtime/default","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/0377833077f2dcde49733432b51808bd/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\
":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/0377833077f2dcde49733432b51808bd/containers/kube-scheduler/c6b7ee54\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/kubernetes/scheduler.conf\",\"host_path\":\"/etc/kubernetes/scheduler.conf\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"kube-scheduler-pause-142925","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"0377833077f2dcde49733432b51808bd","kubernetes.io/config.hash":"0377833077f2dcde49733432b51808bd","kubernetes.io/config.seen":"2023-05-31T19:25:47.597618465Z","kubernetes.io/config.source":"file"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"478eaf89680befbce2bd6c495bd58bce3c8fc95be30d948c1b7a09dbeb666faf","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/478eaf89680befbce2bd6c495bd58bce3c8fc95be30d948c1b7a09dbeb666faf/us
erdata","rootfs":"/var/lib/containers/storage/overlay/aaf7035d617abc913ce351fbb70c79c27727785b1dd14723f9a6f706761b8be5/merged","created":"2023-05-31T19:26:52.657838913Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"6d52bee3","io.kubernetes.container.name":"kube-proxy","io.kubernetes.container.restartCount":"1","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"6d52bee3\",\"io.kubernetes.container.restartCount\":\"1\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"478eaf89680befbce2bd6c495bd58bce3c8fc95be30d948c1b7a09dbeb666faf","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2023-05-31T19:26:52.20448505Z","io.kubernetes.cr
i-o.Image":"29921a084542255eb81a1a660a603b1a24636d88b202f9010daae75fa32754c0","io.kubernetes.cri-o.ImageName":"registry.k8s.io/kube-proxy:v1.27.2","io.kubernetes.cri-o.ImageRef":"29921a084542255eb81a1a660a603b1a24636d88b202f9010daae75fa32754c0","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-proxy\",\"io.kubernetes.pod.name\":\"kube-proxy-hrhmq\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"41ccce12-4ef2-49e2-9bbd-a664a715e971\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-proxy-hrhmq_41ccce12-4ef2-49e2-9bbd-a664a715e971/kube-proxy/1.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-proxy\",\"attempt\":1}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/aaf7035d617abc913ce351fbb70c79c27727785b1dd14723f9a6f706761b8be5/merged","io.kubernetes.cri-o.Name":"k8s_kube-proxy_kube-proxy-hrhmq_kube-system_41ccce12-4ef2-49e2-9bbd-a664a715e971_1","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/f
b743709b673303bc91dca669507e1c2b76b82c343d4720bebf2c016c3cb1f79/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"fb743709b673303bc91dca669507e1c2b76b82c343d4720bebf2c016c3cb1f79","io.kubernetes.cri-o.SandboxName":"k8s_kube-proxy-hrhmq_kube-system_41ccce12-4ef2-49e2-9bbd-a664a715e971_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/run/xtables.lock\",\"host_path\":\"/run/xtables.lock\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/lib/modules\",\"host_path\":\"/lib/modules\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/41ccce12-4ef2-49e2-9bbd-a664a715e971/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/41c
cce12-4ef2-49e2-9bbd-a664a715e971/containers/kube-proxy/64483723\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/lib/kube-proxy\",\"host_path\":\"/var/lib/kubelet/pods/41ccce12-4ef2-49e2-9bbd-a664a715e971/volumes/kubernetes.io~configmap/kube-proxy\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/run/secrets/kubernetes.io/serviceaccount\",\"host_path\":\"/var/lib/kubelet/pods/41ccce12-4ef2-49e2-9bbd-a664a715e971/volumes/kubernetes.io~projected/kube-api-access-q78pt\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"kube-proxy-hrhmq","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"41ccce12-4ef2-49e2-9bbd-a664a715e971","kubernetes.io/config.seen":"2023-05-31T19:26:08.251032319Z","kubernetes.io/config.source":"api"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"53d9487d7e1b556075195bfb0286d52f52853cdea8e81c656c257bc
f3e6f01d3","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/53d9487d7e1b556075195bfb0286d52f52853cdea8e81c656c257bcf3e6f01d3/userdata","rootfs":"/var/lib/containers/storage/overlay/471a2702074bd6b361cb3909c4c2d00ddea6bf200a59fb6147912516970209cc/merged","created":"2023-05-31T19:26:52.629128238Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"78054a79","io.kubernetes.container.name":"kube-apiserver","io.kubernetes.container.restartCount":"1","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"78054a79\",\"io.kubernetes.container.restartCount\":\"1\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"53d9487d7e1b556075195bfb0286d52
f52853cdea8e81c656c257bcf3e6f01d3","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2023-05-31T19:26:52.377075628Z","io.kubernetes.cri-o.Image":"72c9df6be7f1b997e4a31b5cb9aa7262e5278905af97e6a69e341e3f0f9bbaae","io.kubernetes.cri-o.ImageName":"registry.k8s.io/kube-apiserver:v1.27.2","io.kubernetes.cri-o.ImageRef":"72c9df6be7f1b997e4a31b5cb9aa7262e5278905af97e6a69e341e3f0f9bbaae","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-apiserver\",\"io.kubernetes.pod.name\":\"kube-apiserver-pause-142925\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"2805357746b6072ed457140ea9e58f3d\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-apiserver-pause-142925_2805357746b6072ed457140ea9e58f3d/kube-apiserver/1.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-apiserver\",\"attempt\":1}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/471a2702074bd6b361cb3909c4c2d00ddea6bf200a59fb6147912516970209cc/merge
d","io.kubernetes.cri-o.Name":"k8s_kube-apiserver_kube-apiserver-pause-142925_kube-system_2805357746b6072ed457140ea9e58f3d_1","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/c3149f0c18fe10a3233101fa22f241c9bb5d186d00b6f8fc1e61dea527ee47c8/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"c3149f0c18fe10a3233101fa22f241c9bb5d186d00b6f8fc1e61dea527ee47c8","io.kubernetes.cri-o.SandboxName":"k8s_kube-apiserver-pause-142925_kube-system_2805357746b6072ed457140ea9e58f3d_0","io.kubernetes.cri-o.SeccompProfilePath":"runtime/default","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/2805357746b6072ed457140ea9e58f3d/containers/kube-apiserver/2e3c3e42\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/ca-certificates\",\"host_path\":\"/etc/ca-certificates\",\"readonly\":true,
\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/2805357746b6072ed457140ea9e58f3d/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/usr/share/ca-certificates\",\"host_path\":\"/usr/share/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/ssl/certs\",\"host_path\":\"/etc/ssl/certs\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/lib/minikube/certs\",\"host_path\":\"/var/lib/minikube/certs\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/usr/local/share/ca-certificates\",\"host_path\":\"/usr/local/share/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"kube-apiserver-pause-142925","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"28053
57746b6072ed457140ea9e58f3d","kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.76.2:8443","kubernetes.io/config.hash":"2805357746b6072ed457140ea9e58f3d","kubernetes.io/config.seen":"2023-05-31T19:25:47.597611655Z","kubernetes.io/config.source":"file"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"7ee1670fa5d41728bdabc13111b625c2ed395d205a748387291c212bf8c167df","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/7ee1670fa5d41728bdabc13111b625c2ed395d205a748387291c212bf8c167df/userdata","rootfs":"/var/lib/containers/storage/overlay/4e924e7e1da9b430bcfde541c0fba146d4c99359f8ead2ee3ffeb8730afee8fc/merged","created":"2023-05-31T19:26:52.568482032Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"1acded71","io.kubernetes.container.name":"etcd","io.kubernetes.container.restartCount":"1","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cr
i-o.Annotations":"{\"io.kubernetes.container.hash\":\"1acded71\",\"io.kubernetes.container.restartCount\":\"1\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"7ee1670fa5d41728bdabc13111b625c2ed395d205a748387291c212bf8c167df","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2023-05-31T19:26:52.182120832Z","io.kubernetes.cri-o.Image":"24bc64e911039ecf00e263be2161797c758b7d82403ca5516ab64047a477f737","io.kubernetes.cri-o.ImageName":"registry.k8s.io/etcd:3.5.7-0","io.kubernetes.cri-o.ImageRef":"24bc64e911039ecf00e263be2161797c758b7d82403ca5516ab64047a477f737","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"etcd\",\"io.kubernetes.pod.name\":\"etcd-pause-142925\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"b4ecd371dbe5a74e478f62163a9abd20\"}","io.kubernetes.cr
i-o.LogPath":"/var/log/pods/kube-system_etcd-pause-142925_b4ecd371dbe5a74e478f62163a9abd20/etcd/1.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"etcd\",\"attempt\":1}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/4e924e7e1da9b430bcfde541c0fba146d4c99359f8ead2ee3ffeb8730afee8fc/merged","io.kubernetes.cri-o.Name":"k8s_etcd_etcd-pause-142925_kube-system_b4ecd371dbe5a74e478f62163a9abd20_1","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/178cab6c62ccf48d57ab82e8f97ffba10af5e65709f69ab001de13cd9f3d70ba/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"178cab6c62ccf48d57ab82e8f97ffba10af5e65709f69ab001de13cd9f3d70ba","io.kubernetes.cri-o.SandboxName":"k8s_etcd-pause-142925_kube-system_b4ecd371dbe5a74e478f62163a9abd20_0","io.kubernetes.cri-o.SeccompProfilePath":"runtime/default","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/hosts\",\"ho
st_path\":\"/var/lib/kubelet/pods/b4ecd371dbe5a74e478f62163a9abd20/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/b4ecd371dbe5a74e478f62163a9abd20/containers/etcd/d2d511a0\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/lib/minikube/etcd\",\"host_path\":\"/var/lib/minikube/etcd\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/lib/minikube/certs/etcd\",\"host_path\":\"/var/lib/minikube/certs/etcd\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"etcd-pause-142925","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"b4ecd371dbe5a74e478f62163a9abd20","kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.76.2:2379","kubernetes.io/config.hash":"b4ecd371dbe5a74e478f62163a9abd20","kubernetes.io/con
fig.seen":"2023-05-31T19:25:47.597619827Z","kubernetes.io/config.source":"file"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"aecfb452ce5e36e1f9750b8b696e43d37fe95157384a205ac6924bc948c6c429","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/aecfb452ce5e36e1f9750b8b696e43d37fe95157384a205ac6924bc948c6c429/userdata","rootfs":"/var/lib/containers/storage/overlay/a1a4406bb5381c788f84cbd5ed5be90c5c1cf353c3660620edf98afc924f4ec6/merged","created":"2023-05-31T19:26:52.637955235Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"28f6f299","io.kubernetes.container.name":"coredns","io.kubernetes.container.ports":"[{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}]","io.kubernetes.container.restartCount":"1","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMes
sagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"28f6f299\",\"io.kubernetes.container.ports\":\"[{\\\"name\\\":\\\"dns\\\",\\\"containerPort\\\":53,\\\"protocol\\\":\\\"UDP\\\"},{\\\"name\\\":\\\"dns-tcp\\\",\\\"containerPort\\\":53,\\\"protocol\\\":\\\"TCP\\\"},{\\\"name\\\":\\\"metrics\\\",\\\"containerPort\\\":9153,\\\"protocol\\\":\\\"TCP\\\"}]\",\"io.kubernetes.container.restartCount\":\"1\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"aecfb452ce5e36e1f9750b8b696e43d37fe95157384a205ac6924bc948c6c429","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2023-05-31T19:26:52.267590258Z","io.kubernetes.cri-o.IP.0":"10.244.0.2","io.kubernetes.cri-o.Image":"97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108","io.kubernetes.cri-o.ImageName":"registry.k8
s.io/coredns/coredns:v1.10.1","io.kubernetes.cri-o.ImageRef":"97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"coredns\",\"io.kubernetes.pod.name\":\"coredns-5d78c9869d-pkjvx\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"684be7a1-9260-4d7d-afe4-22eba3383872\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_coredns-5d78c9869d-pkjvx_684be7a1-9260-4d7d-afe4-22eba3383872/coredns/1.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"coredns\",\"attempt\":1}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/a1a4406bb5381c788f84cbd5ed5be90c5c1cf353c3660620edf98afc924f4ec6/merged","io.kubernetes.cri-o.Name":"k8s_coredns_coredns-5d78c9869d-pkjvx_kube-system_684be7a1-9260-4d7d-afe4-22eba3383872_1","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/16acc94ea4916f4eff611859730cd5bf5e8c05be2421a59e6e744d4040bb2529/userdata/resolv.conf","io.kubernetes.cri-
o.SandboxID":"16acc94ea4916f4eff611859730cd5bf5e8c05be2421a59e6e744d4040bb2529","io.kubernetes.cri-o.SandboxName":"k8s_coredns-5d78c9869d-pkjvx_kube-system_684be7a1-9260-4d7d-afe4-22eba3383872_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/coredns\",\"host_path\":\"/var/lib/kubelet/pods/684be7a1-9260-4d7d-afe4-22eba3383872/volumes/kubernetes.io~configmap/config-volume\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/684be7a1-9260-4d7d-afe4-22eba3383872/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/684be7a1-9260-4d7d-afe4-22eba3383872/containers/coredns/f73f2267\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\"
:\"/var/run/secrets/kubernetes.io/serviceaccount\",\"host_path\":\"/var/lib/kubelet/pods/684be7a1-9260-4d7d-afe4-22eba3383872/volumes/kubernetes.io~projected/kube-api-access-wrtlz\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"coredns-5d78c9869d-pkjvx","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"684be7a1-9260-4d7d-afe4-22eba3383872","kubernetes.io/config.seen":"2023-05-31T19:26:39.657639372Z","kubernetes.io/config.source":"api"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"ccff2afb9f0d7350e36e7ec256289cccb2c5a8cfe9135602ab761d803ef779e5","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/ccff2afb9f0d7350e36e7ec256289cccb2c5a8cfe9135602ab761d803ef779e5/userdata","rootfs":"/var/lib/containers/storage/overlay/104384c1ba086b4cac274daa3b57422349e36183d2c6dd3cffbbed524593dc2f/merged","created":"2023-05-31T19:26:52.521447246Z","annotations":{"io.container.manager":"cr
i-o","io.kubernetes.container.hash":"a65fe55","io.kubernetes.container.name":"kube-controller-manager","io.kubernetes.container.restartCount":"1","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"a65fe55\",\"io.kubernetes.container.restartCount\":\"1\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"ccff2afb9f0d7350e36e7ec256289cccb2c5a8cfe9135602ab761d803ef779e5","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2023-05-31T19:26:52.137736707Z","io.kubernetes.cri-o.Image":"2ee705380c3c59a538b853cbe9ae9886ebbd0001a4cea4add5adeea48e5f48d4","io.kubernetes.cri-o.ImageName":"registry.k8s.io/kube-controller-manager:v1.27.2","io.kubernetes.cri-o.ImageRef":"2ee7053
80c3c59a538b853cbe9ae9886ebbd0001a4cea4add5adeea48e5f48d4","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-controller-manager\",\"io.kubernetes.pod.name\":\"kube-controller-manager-pause-142925\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"6f087b5d8e5e75ed61e767c658e1fc58\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-controller-manager-pause-142925_6f087b5d8e5e75ed61e767c658e1fc58/kube-controller-manager/1.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-controller-manager\",\"attempt\":1}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/104384c1ba086b4cac274daa3b57422349e36183d2c6dd3cffbbed524593dc2f/merged","io.kubernetes.cri-o.Name":"k8s_kube-controller-manager_kube-controller-manager-pause-142925_kube-system_6f087b5d8e5e75ed61e767c658e1fc58_1","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/a91695c8fa14180aa391026ab5ffc7af8e16dd64f35f0938705c45e039ba7dbe/userdata/resolv.conf",
"io.kubernetes.cri-o.SandboxID":"a91695c8fa14180aa391026ab5ffc7af8e16dd64f35f0938705c45e039ba7dbe","io.kubernetes.cri-o.SandboxName":"k8s_kube-controller-manager-pause-142925_kube-system_6f087b5d8e5e75ed61e767c658e1fc58_0","io.kubernetes.cri-o.SeccompProfilePath":"runtime/default","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/ca-certificates\",\"host_path\":\"/etc/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/6f087b5d8e5e75ed61e767c658e1fc58/containers/kube-controller-manager/092317af\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/6f087b5d8e5e75ed61e767c658e1fc58/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/ssl/certs\",\"host
_path\":\"/etc/ssl/certs\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/kubernetes/controller-manager.conf\",\"host_path\":\"/etc/kubernetes/controller-manager.conf\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/usr/share/ca-certificates\",\"host_path\":\"/usr/share/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/lib/minikube/certs\",\"host_path\":\"/var/lib/minikube/certs\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/usr/local/share/ca-certificates\",\"host_path\":\"/usr/local/share/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/usr/libexec/kubernetes/kubelet-plugins/volume/exec\",\"host_path\":\"/usr/libexec/kubernetes/kubelet-plugins/volume/exec\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"kube-controller-manager-paus
e-142925","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"6f087b5d8e5e75ed61e767c658e1fc58","kubernetes.io/config.hash":"6f087b5d8e5e75ed61e767c658e1fc58","kubernetes.io/config.seen":"2023-05-31T19:25:47.597617144Z","kubernetes.io/config.source":"file"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"ebefff5a4557aa51ef59f346ac5bd484e9300fed7b41c4dfd142d34f42af2a35","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/ebefff5a4557aa51ef59f346ac5bd484e9300fed7b41c4dfd142d34f42af2a35/userdata","rootfs":"/var/lib/containers/storage/overlay/9d57da5b3aa6c7618edd5a28d81c48dcc9c0e4cba5fc90652b42d515e2bcd6b2/merged","created":"2023-05-31T19:26:52.62374625Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"fcbd3db6","io.kubernetes.container.name":"kindnet-cni","io.kubernetes.container.restartCount":"1","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.containe
r.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"fcbd3db6\",\"io.kubernetes.container.restartCount\":\"1\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"ebefff5a4557aa51ef59f346ac5bd484e9300fed7b41c4dfd142d34f42af2a35","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2023-05-31T19:26:52.288980786Z","io.kubernetes.cri-o.Image":"b18bf71b941bae2e12db1c07e567ad14e4febbc778310a0fc64487f1ac877d79","io.kubernetes.cri-o.ImageName":"docker.io/kindest/kindnetd:v20230511-dc714da8","io.kubernetes.cri-o.ImageRef":"b18bf71b941bae2e12db1c07e567ad14e4febbc778310a0fc64487f1ac877d79","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kindnet-cni\",\"io.kubernetes.pod.name\":\"kindnet-tj2db\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kuber
netes.pod.uid\":\"78b57b57-65bd-42d1-9c09-929951cdcb97\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kindnet-tj2db_78b57b57-65bd-42d1-9c09-929951cdcb97/kindnet-cni/1.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kindnet-cni\",\"attempt\":1}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/9d57da5b3aa6c7618edd5a28d81c48dcc9c0e4cba5fc90652b42d515e2bcd6b2/merged","io.kubernetes.cri-o.Name":"k8s_kindnet-cni_kindnet-tj2db_kube-system_78b57b57-65bd-42d1-9c09-929951cdcb97_1","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/452b7cd4d17bb888e8f8a8503975bd1a5a3ba14adddf5383d6de894a0413f28f/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"452b7cd4d17bb888e8f8a8503975bd1a5a3ba14adddf5383d6de894a0413f28f","io.kubernetes.cri-o.SandboxName":"k8s_kindnet-tj2db_kube-system_78b57b57-65bd-42d1-9c09-929951cdcb97_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TT
Y":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/run/xtables.lock\",\"host_path\":\"/run/xtables.lock\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/lib/modules\",\"host_path\":\"/lib/modules\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/78b57b57-65bd-42d1-9c09-929951cdcb97/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/78b57b57-65bd-42d1-9c09-929951cdcb97/containers/kindnet-cni/a8f51c8b\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/cni/net.d\",\"host_path\":\"/etc/cni/net.d\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/run/secrets/kubernetes.io/serviceaccount\",\"host_path\":\"/var/lib/kubelet/pods/78b57b57-65bd-42d1-9c09-929951cdcb97/volumes/kubernetes.io~p
rojected/kube-api-access-qgfnl\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"kindnet-tj2db","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"78b57b57-65bd-42d1-9c09-929951cdcb97","kubernetes.io/config.seen":"2023-05-31T19:26:08.218879255Z","kubernetes.io/config.source":"api"},"owner":"root"}]
	I0531 19:26:55.136184  135811 cri.go:125] list returned 7 containers
	I0531 19:26:55.136223  135811 cri.go:128] container: {ID:34090e829e11e53c79999781fb71ff7bf5c9378b9ed69c8ce4045b6264d718e3 Status:stopped}
	I0531 19:26:55.136261  135811 cri.go:134] skipping {34090e829e11e53c79999781fb71ff7bf5c9378b9ed69c8ce4045b6264d718e3 stopped}: state = "stopped", want "paused"
	I0531 19:26:55.136284  135811 cri.go:128] container: {ID:478eaf89680befbce2bd6c495bd58bce3c8fc95be30d948c1b7a09dbeb666faf Status:stopped}
	I0531 19:26:55.136314  135811 cri.go:134] skipping {478eaf89680befbce2bd6c495bd58bce3c8fc95be30d948c1b7a09dbeb666faf stopped}: state = "stopped", want "paused"
	I0531 19:26:55.136351  135811 cri.go:128] container: {ID:53d9487d7e1b556075195bfb0286d52f52853cdea8e81c656c257bcf3e6f01d3 Status:stopped}
	I0531 19:26:55.136374  135811 cri.go:134] skipping {53d9487d7e1b556075195bfb0286d52f52853cdea8e81c656c257bcf3e6f01d3 stopped}: state = "stopped", want "paused"
	I0531 19:26:55.136396  135811 cri.go:128] container: {ID:7ee1670fa5d41728bdabc13111b625c2ed395d205a748387291c212bf8c167df Status:stopped}
	I0531 19:26:55.136429  135811 cri.go:134] skipping {7ee1670fa5d41728bdabc13111b625c2ed395d205a748387291c212bf8c167df stopped}: state = "stopped", want "paused"
	I0531 19:26:55.136457  135811 cri.go:128] container: {ID:aecfb452ce5e36e1f9750b8b696e43d37fe95157384a205ac6924bc948c6c429 Status:stopped}
	I0531 19:26:55.136480  135811 cri.go:134] skipping {aecfb452ce5e36e1f9750b8b696e43d37fe95157384a205ac6924bc948c6c429 stopped}: state = "stopped", want "paused"
	I0531 19:26:55.136513  135811 cri.go:128] container: {ID:ccff2afb9f0d7350e36e7ec256289cccb2c5a8cfe9135602ab761d803ef779e5 Status:stopped}
	I0531 19:26:55.136537  135811 cri.go:134] skipping {ccff2afb9f0d7350e36e7ec256289cccb2c5a8cfe9135602ab761d803ef779e5 stopped}: state = "stopped", want "paused"
	I0531 19:26:55.136565  135811 cri.go:128] container: {ID:ebefff5a4557aa51ef59f346ac5bd484e9300fed7b41c4dfd142d34f42af2a35 Status:stopped}
	I0531 19:26:55.136598  135811 cri.go:134] skipping {ebefff5a4557aa51ef59f346ac5bd484e9300fed7b41c4dfd142d34f42af2a35 stopped}: state = "stopped", want "paused"
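
The cri.go lines above show how minikube decides there is nothing to unpause: it lists every CRI container, then skips each one whose state is not "paused". A minimal Go sketch of that filter follows; the Container type and the filterByState name are illustrative, not minikube's actual API.

    package main

    import "fmt"

    // Container mirrors the {ID Status} pairs printed by cri.go above;
    // the type and function names here are illustrative only.
    type Container struct {
        ID     string
        Status string
    }

    // filterByState keeps containers whose status matches want and logs a
    // "skipping" line for the rest, like the cri.go:134 messages above.
    func filterByState(cs []Container, want string) []Container {
        var kept []Container
        for _, c := range cs {
            if c.Status != want {
                fmt.Printf("skipping {%s %s}: state = %q, want %q\n", c.ID, c.Status, c.Status, want)
                continue
            }
            kept = append(kept, c)
        }
        return kept
    }

    func main() {
        stopped := []Container{{ID: "34090e829e11", Status: "stopped"}}
        fmt.Println(filterByState(stopped, "paused")) // prints [] - nothing to pause
    }
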
	I0531 19:26:55.136680  135811 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0531 19:26:55.149923  135811 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0531 19:26:55.149947  135811 kubeadm.go:636] restartCluster start
	I0531 19:26:55.150034  135811 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0531 19:26:55.163323  135811 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0531 19:26:55.164082  135811 kubeconfig.go:92] found "pause-142925" server: "https://192.168.76.2:8443"
	I0531 19:26:55.165208  135811 kapi.go:59] client config for pause-142925: &rest.Config{Host:"https://192.168.76.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/16569-2389/.minikube/profiles/pause-142925/client.crt", KeyFile:"/home/jenkins/minikube-integration/16569-2389/.minikube/profiles/pause-142925/client.key", CAFile:"/home/jenkins/minikube-integration/16569-2389/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x13dde10), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
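
The rest.Config dump above amounts to a client built from the profile's client certificate, key, and cluster CA. A minimal client-go sketch of the same construction, assuming only the standard k8s.io/client-go packages; the error handling is illustrative:

    package main

    import (
        "fmt"

        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/rest"
    )

    func main() {
        // Certificate paths taken from the rest.Config dump above.
        home := "/home/jenkins/minikube-integration/16569-2389/.minikube"
        cfg := &rest.Config{
            Host: "https://192.168.76.2:8443",
            TLSClientConfig: rest.TLSClientConfig{
                CertFile: home + "/profiles/pause-142925/client.crt",
                KeyFile:  home + "/profiles/pause-142925/client.key",
                CAFile:   home + "/ca.crt",
            },
        }
        clientset, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            fmt.Println("client build failed (expected off the CI host):", err)
            return
        }
        fmt.Println("client ready:", clientset != nil)
    }
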
	I0531 19:26:55.166439  135811 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0531 19:26:55.185462  135811 api_server.go:166] Checking apiserver status ...
	I0531 19:26:55.185528  135811 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 19:26:55.200662  135811 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 19:26:55.701317  135811 api_server.go:166] Checking apiserver status ...
	I0531 19:26:55.701404  135811 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 19:26:55.713728  135811 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 19:26:56.206801  135811 api_server.go:166] Checking apiserver status ...
	I0531 19:26:56.206882  135811 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 19:26:56.281074  135811 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 19:26:56.701695  135811 api_server.go:166] Checking apiserver status ...
	I0531 19:26:56.701768  135811 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 19:26:56.716990  135811 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 19:26:57.201174  135811 api_server.go:166] Checking apiserver status ...
	I0531 19:26:57.201261  135811 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 19:26:57.214520  135811 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 19:26:57.700796  135811 api_server.go:166] Checking apiserver status ...
	I0531 19:26:57.700876  135811 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 19:26:57.719698  135811 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 19:26:58.201424  135811 api_server.go:166] Checking apiserver status ...
	I0531 19:26:58.201508  135811 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 19:26:58.215653  135811 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 19:26:58.701230  135811 api_server.go:166] Checking apiserver status ...
	I0531 19:26:58.701311  135811 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 19:26:58.714646  135811 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 19:26:59.201275  135811 api_server.go:166] Checking apiserver status ...
	I0531 19:26:59.201337  135811 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 19:26:59.217870  135811 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 19:26:59.701560  135811 api_server.go:166] Checking apiserver status ...
	I0531 19:26:59.701644  135811 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 19:26:59.714151  135811 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 19:27:00.200783  135811 api_server.go:166] Checking apiserver status ...
	I0531 19:27:00.200861  135811 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 19:27:00.214783  135811 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 19:27:00.701408  135811 api_server.go:166] Checking apiserver status ...
	I0531 19:27:00.701498  135811 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 19:27:00.716316  135811 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 19:27:01.200909  135811 api_server.go:166] Checking apiserver status ...
	I0531 19:27:01.201000  135811 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 19:27:01.216281  135811 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 19:27:01.700848  135811 api_server.go:166] Checking apiserver status ...
	I0531 19:27:01.700931  135811 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 19:27:01.716507  135811 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 19:27:02.201030  135811 api_server.go:166] Checking apiserver status ...
	I0531 19:27:02.201127  135811 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 19:27:02.215638  135811 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 19:27:02.700924  135811 api_server.go:166] Checking apiserver status ...
	I0531 19:27:02.701005  135811 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 19:27:02.714649  135811 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 19:27:03.201583  135811 api_server.go:166] Checking apiserver status ...
	I0531 19:27:03.201667  135811 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 19:27:03.218318  135811 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 19:27:03.700874  135811 api_server.go:166] Checking apiserver status ...
	I0531 19:27:03.700958  135811 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 19:27:03.713845  135811 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 19:27:04.201476  135811 api_server.go:166] Checking apiserver status ...
	I0531 19:27:04.201592  135811 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 19:27:04.219456  135811 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 19:27:04.700829  135811 api_server.go:166] Checking apiserver status ...
	I0531 19:27:04.700917  135811 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 19:27:04.714549  135811 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 19:27:05.186301  135811 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
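
The block above polls "sudo pgrep -xnf kube-apiserver.*minikube.*" roughly every half second until a ten-second context deadline expires, at which point the restart path concludes the control plane needs reconfiguring. A sketch of that poll-until-deadline pattern; waitForAPIServerPID is an illustrative name, not minikube's function:

    package main

    import (
        "context"
        "fmt"
        "os/exec"
        "time"
    )

    // waitForAPIServerPID retries the pgrep check from the log until it
    // succeeds or the context deadline expires.
    func waitForAPIServerPID(ctx context.Context) error {
        ticker := time.NewTicker(500 * time.Millisecond)
        defer ticker.Stop()
        for {
            if exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil {
                return nil // apiserver process found
            }
            select {
            case <-ctx.Done():
                return fmt.Errorf("apiserver error: %w", ctx.Err())
            case <-ticker.C:
            }
        }
    }

    func main() {
        ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
        defer cancel()
        fmt.Println(waitForAPIServerPID(ctx)) // "context deadline exceeded" if it never appears
    }
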
	I0531 19:27:05.186330  135811 kubeadm.go:1123] stopping kube-system containers ...
	I0531 19:27:05.186343  135811 cri.go:53] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0531 19:27:05.186409  135811 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0531 19:27:05.261708  135811 cri.go:88] found id: "b7eeb9e9931e50ff612422f3fc028a906b4392080447dc3fb66403d47e63ac4c"
	I0531 19:27:05.261734  135811 cri.go:88] found id: "53d9487d7e1b556075195bfb0286d52f52853cdea8e81c656c257bcf3e6f01d3"
	I0531 19:27:05.261740  135811 cri.go:88] found id: "34090e829e11e53c79999781fb71ff7bf5c9378b9ed69c8ce4045b6264d718e3"
	I0531 19:27:05.261745  135811 cri.go:88] found id: "ebefff5a4557aa51ef59f346ac5bd484e9300fed7b41c4dfd142d34f42af2a35"
	I0531 19:27:05.261749  135811 cri.go:88] found id: "aecfb452ce5e36e1f9750b8b696e43d37fe95157384a205ac6924bc948c6c429"
	I0531 19:27:05.261753  135811 cri.go:88] found id: "478eaf89680befbce2bd6c495bd58bce3c8fc95be30d948c1b7a09dbeb666faf"
	I0531 19:27:05.261758  135811 cri.go:88] found id: "7ee1670fa5d41728bdabc13111b625c2ed395d205a748387291c212bf8c167df"
	I0531 19:27:05.261762  135811 cri.go:88] found id: "ccff2afb9f0d7350e36e7ec256289cccb2c5a8cfe9135602ab761d803ef779e5"
	I0531 19:27:05.261766  135811 cri.go:88] found id: ""
	I0531 19:27:05.261771  135811 cri.go:233] Stopping containers: [b7eeb9e9931e50ff612422f3fc028a906b4392080447dc3fb66403d47e63ac4c 53d9487d7e1b556075195bfb0286d52f52853cdea8e81c656c257bcf3e6f01d3 34090e829e11e53c79999781fb71ff7bf5c9378b9ed69c8ce4045b6264d718e3 ebefff5a4557aa51ef59f346ac5bd484e9300fed7b41c4dfd142d34f42af2a35 aecfb452ce5e36e1f9750b8b696e43d37fe95157384a205ac6924bc948c6c429 478eaf89680befbce2bd6c495bd58bce3c8fc95be30d948c1b7a09dbeb666faf 7ee1670fa5d41728bdabc13111b625c2ed395d205a748387291c212bf8c167df ccff2afb9f0d7350e36e7ec256289cccb2c5a8cfe9135602ab761d803ef779e5]
	I0531 19:27:05.261829  135811 ssh_runner.go:195] Run: which crictl
	I0531 19:27:05.268997  135811 ssh_runner.go:195] Run: sudo /usr/bin/crictl stop --timeout=10 b7eeb9e9931e50ff612422f3fc028a906b4392080447dc3fb66403d47e63ac4c 53d9487d7e1b556075195bfb0286d52f52853cdea8e81c656c257bcf3e6f01d3 34090e829e11e53c79999781fb71ff7bf5c9378b9ed69c8ce4045b6264d718e3 ebefff5a4557aa51ef59f346ac5bd484e9300fed7b41c4dfd142d34f42af2a35 aecfb452ce5e36e1f9750b8b696e43d37fe95157384a205ac6924bc948c6c429 478eaf89680befbce2bd6c495bd58bce3c8fc95be30d948c1b7a09dbeb666faf 7ee1670fa5d41728bdabc13111b625c2ed395d205a748387291c212bf8c167df ccff2afb9f0d7350e36e7ec256289cccb2c5a8cfe9135602ab761d803ef779e5
	I0531 19:27:05.712940  135811 ssh_runner.go:195] Run: sudo systemctl stop kubelet
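
Stopping the kube-system containers happens in a single crictl invocation with a 10-second grace period per container, followed by stopping the kubelet so it does not restart them. A sketch of those two steps; the ID list is abbreviated, and running this requires root on the node:

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        ids := []string{
            "b7eeb9e9931e50ff612422f3fc028a906b4392080447dc3fb66403d47e63ac4c",
            "53d9487d7e1b556075195bfb0286d52f52853cdea8e81c656c257bcf3e6f01d3",
            // remaining container IDs from the log elided
        }
        args := append([]string{"/usr/bin/crictl", "stop", "--timeout=10"}, ids...)
        if out, err := exec.Command("sudo", args...).CombinedOutput(); err != nil {
            fmt.Println(err, string(out))
        }
        // Stop the kubelet so it does not immediately restart the static pods.
        _ = exec.Command("sudo", "systemctl", "stop", "kubelet").Run()
    }
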
	I0531 19:27:05.817941  135811 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0531 19:27:05.829218  135811 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5639 May 31 19:25 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5656 May 31 19:25 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 1987 May 31 19:25 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5600 May 31 19:25 /etc/kubernetes/scheduler.conf
	
	I0531 19:27:05.829286  135811 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0531 19:27:05.840324  135811 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0531 19:27:05.850978  135811 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0531 19:27:05.861694  135811 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0531 19:27:05.861759  135811 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0531 19:27:05.871942  135811 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0531 19:27:05.882564  135811 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0531 19:27:05.882627  135811 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
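
The grep/rm sequence above keeps only the kubeconfig files that already reference https://control-plane.minikube.internal:8443 and deletes the rest, so the next kubeadm phase regenerates them. A sketch of that check, assuming a plain substring match is sufficient (minikube's real logic may differ):

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    // pruneStaleKubeconfigs removes any config that does not mention the
    // expected control-plane endpoint, matching the grep/rm sequence above.
    func pruneStaleKubeconfigs(endpoint string, files []string) {
        for _, f := range files {
            data, err := os.ReadFile(f)
            if err == nil && strings.Contains(string(data), endpoint) {
                continue // endpoint present, keep the file
            }
            fmt.Printf("%q may not be in %s - will remove\n", endpoint, f)
            os.Remove(f)
        }
    }

    func main() {
        pruneStaleKubeconfigs("https://control-plane.minikube.internal:8443", []string{
            "/etc/kubernetes/controller-manager.conf",
            "/etc/kubernetes/scheduler.conf",
        })
    }
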
	I0531 19:27:05.893109  135811 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0531 19:27:05.903935  135811 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0531 19:27:05.903957  135811 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0531 19:27:05.969094  135811 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0531 19:27:07.670281  135811 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.701109858s)
	I0531 19:27:07.670312  135811 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0531 19:27:07.885796  135811 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0531 19:27:07.969022  135811 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
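
Rather than a full kubeadm init, the restart replays individual init phases against the generated config: certs, kubeconfig, kubelet-start, control-plane, and etcd, with "addon all" run later once the API server is healthy. A sketch that assembles those commands; it prints them instead of executing, since running them needs root and minikube's binary layout:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        phases := []string{"certs all", "kubeconfig all", "kubelet-start", "control-plane all", "etcd local"}
        for _, p := range phases {
            args := []string{"env", "PATH=/var/lib/minikube/binaries/v1.27.2:$PATH", "kubeadm", "init", "phase"}
            args = append(args, strings.Fields(p)...)
            args = append(args, "--config", "/var/tmp/minikube/kubeadm.yaml")
            // The real run wraps this in `sudo /bin/bash -c` so $PATH expands.
            fmt.Println(exec.Command("sudo", args...).String())
        }
    }
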
	I0531 19:27:08.077786  135811 api_server.go:52] waiting for apiserver process to appear ...
	I0531 19:27:08.077853  135811 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 19:27:08.594657  135811 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 19:27:09.094785  135811 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 19:27:09.124362  135811 api_server.go:72] duration metric: took 1.046577778s to wait for apiserver process to appear ...
	I0531 19:27:09.124387  135811 api_server.go:88] waiting for apiserver healthz status ...
	I0531 19:27:09.124404  135811 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0531 19:27:13.292714  135811 api_server.go:279] https://192.168.76.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0531 19:27:13.292746  135811 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0531 19:27:13.793373  135811 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0531 19:27:13.816934  135811 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0531 19:27:13.816964  135811 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0531 19:27:14.293325  135811 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0531 19:27:14.322238  135811 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0531 19:27:14.322266  135811 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0531 19:27:14.792812  135811 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0531 19:27:14.810290  135811 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0531 19:27:14.810319  135811 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0531 19:27:15.293047  135811 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0531 19:27:15.306612  135811 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I0531 19:27:15.335703  135811 api_server.go:141] control plane version: v1.27.2
	I0531 19:27:15.335728  135811 api_server.go:131] duration metric: took 6.21133553s to wait for apiserver health ...
	I0531 19:27:15.335738  135811 cni.go:84] Creating CNI manager for ""
	I0531 19:27:15.335745  135811 cni.go:142] "docker" driver + "crio" runtime found, recommending kindnet
	I0531 19:27:15.337779  135811 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0531 19:27:15.339661  135811 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0531 19:27:15.348919  135811 cni.go:181] applying CNI manifest using /var/lib/minikube/binaries/v1.27.2/kubectl ...
	I0531 19:27:15.348937  135811 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0531 19:27:15.401707  135811 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0531 19:27:16.944477  135811 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.27.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.54273733s)
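
With the docker driver and the crio runtime, minikube selects kindnet as the CNI, writes the generated manifest to /var/tmp/minikube/cni.yaml on the node, and applies it with the pinned kubectl against the node-local kubeconfig. A sketch of the write-then-apply step; the manifest placeholder stands in for the 2438-byte document the log mentions:

    package main

    import (
        "os"
        "os/exec"
    )

    func main() {
        // Placeholder for the kindnet manifest the log refers to.
        manifest := []byte("# kindnet CNI manifest goes here\n")
        if err := os.WriteFile("/var/tmp/minikube/cni.yaml", manifest, 0o644); err != nil {
            panic(err)
        }
        cmd := exec.Command("sudo", "/var/lib/minikube/binaries/v1.27.2/kubectl",
            "apply", "--kubeconfig=/var/lib/minikube/kubeconfig",
            "-f", "/var/tmp/minikube/cni.yaml")
        cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
        _ = cmd.Run() // needs root and the minikube node layout
    }
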
	I0531 19:27:16.944572  135811 system_pods.go:43] waiting for kube-system pods to appear ...
	I0531 19:27:16.954820  135811 system_pods.go:59] 7 kube-system pods found
	I0531 19:27:16.954902  135811 system_pods.go:61] "coredns-5d78c9869d-pkjvx" [684be7a1-9260-4d7d-afe4-22eba3383872] Running
	I0531 19:27:16.954932  135811 system_pods.go:61] "etcd-pause-142925" [62113c03-4eb3-4204-b5a1-3a827d4aebc3] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0531 19:27:16.954972  135811 system_pods.go:61] "kindnet-tj2db" [78b57b57-65bd-42d1-9c09-929951cdcb97] Running
	I0531 19:27:16.954998  135811 system_pods.go:61] "kube-apiserver-pause-142925" [1fe7f5b8-f674-4b80-84b9-05168e911372] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0531 19:27:16.955023  135811 system_pods.go:61] "kube-controller-manager-pause-142925" [418d12fe-7b0c-4df2-9a9f-0ee8d39ad60f] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0531 19:27:16.955044  135811 system_pods.go:61] "kube-proxy-hrhmq" [41ccce12-4ef2-49e2-9bbd-a664a715e971] Running
	I0531 19:27:16.955080  135811 system_pods.go:61] "kube-scheduler-pause-142925" [58ccb0c6-891d-4066-92b2-618238e7d456] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0531 19:27:16.955108  135811 system_pods.go:74] duration metric: took 10.512335ms to wait for pod list to return data ...
	I0531 19:27:16.955132  135811 node_conditions.go:102] verifying NodePressure condition ...
	I0531 19:27:16.976520  135811 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0531 19:27:16.976593  135811 node_conditions.go:123] node cpu capacity is 2
	I0531 19:27:16.976619  135811 node_conditions.go:105] duration metric: took 21.463931ms to run NodePressure ...
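
The NodePressure verification reads the node's reported capacity (the ephemeral storage and CPU figures above) straight from the API object. A client-go sketch of the same read, using the kubeconfig path from this run; the ResourceList helper methods are part of k8s.io/api/core/v1:

    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Kubeconfig path as updated by this run; adjust for other environments.
        cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/16569-2389/kubeconfig")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        node, err := cs.CoreV1().Nodes().Get(context.Background(), "pause-142925", metav1.GetOptions{})
        if err != nil {
            panic(err)
        }
        // ResourceList helpers return resource.Quantity values, e.g. 203034800Ki and 2.
        fmt.Println("ephemeral-storage:", node.Status.Capacity.StorageEphemeral().String())
        fmt.Println("cpu:", node.Status.Capacity.Cpu().String())
    }
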
	I0531 19:27:16.976650  135811 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0531 19:27:17.315170  135811 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0531 19:27:17.321795  135811 kubeadm.go:787] kubelet initialised
	I0531 19:27:17.321864  135811 kubeadm.go:788] duration metric: took 6.635439ms waiting for restarted kubelet to initialise ...
	I0531 19:27:17.321886  135811 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0531 19:27:17.329124  135811 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5d78c9869d-pkjvx" in "kube-system" namespace to be "Ready" ...
	I0531 19:27:17.336988  135811 pod_ready.go:92] pod "coredns-5d78c9869d-pkjvx" in "kube-system" namespace has status "Ready":"True"
	I0531 19:27:17.337054  135811 pod_ready.go:81] duration metric: took 7.85651ms waiting for pod "coredns-5d78c9869d-pkjvx" in "kube-system" namespace to be "Ready" ...
	I0531 19:27:17.337079  135811 pod_ready.go:78] waiting up to 4m0s for pod "etcd-pause-142925" in "kube-system" namespace to be "Ready" ...
	I0531 19:27:19.352656  135811 pod_ready.go:102] pod "etcd-pause-142925" in "kube-system" namespace has status "Ready":"False"
	I0531 19:27:21.353358  135811 pod_ready.go:102] pod "etcd-pause-142925" in "kube-system" namespace has status "Ready":"False"
	I0531 19:27:21.854132  135811 pod_ready.go:92] pod "etcd-pause-142925" in "kube-system" namespace has status "Ready":"True"
	I0531 19:27:21.854162  135811 pod_ready.go:81] duration metric: took 4.51706143s waiting for pod "etcd-pause-142925" in "kube-system" namespace to be "Ready" ...
	I0531 19:27:21.854188  135811 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-pause-142925" in "kube-system" namespace to be "Ready" ...
	I0531 19:27:23.870602  135811 pod_ready.go:92] pod "kube-apiserver-pause-142925" in "kube-system" namespace has status "Ready":"True"
	I0531 19:27:23.870629  135811 pod_ready.go:81] duration metric: took 2.016429612s waiting for pod "kube-apiserver-pause-142925" in "kube-system" namespace to be "Ready" ...
	I0531 19:27:23.870641  135811 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-pause-142925" in "kube-system" namespace to be "Ready" ...
	I0531 19:27:25.389950  135811 pod_ready.go:92] pod "kube-controller-manager-pause-142925" in "kube-system" namespace has status "Ready":"True"
	I0531 19:27:25.389970  135811 pod_ready.go:81] duration metric: took 1.519322593s waiting for pod "kube-controller-manager-pause-142925" in "kube-system" namespace to be "Ready" ...
	I0531 19:27:25.389981  135811 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-hrhmq" in "kube-system" namespace to be "Ready" ...
	I0531 19:27:25.401819  135811 pod_ready.go:92] pod "kube-proxy-hrhmq" in "kube-system" namespace has status "Ready":"True"
	I0531 19:27:25.401884  135811 pod_ready.go:81] duration metric: took 11.887596ms waiting for pod "kube-proxy-hrhmq" in "kube-system" namespace to be "Ready" ...
	I0531 19:27:25.401908  135811 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-pause-142925" in "kube-system" namespace to be "Ready" ...
	I0531 19:27:25.918547  135811 pod_ready.go:92] pod "kube-scheduler-pause-142925" in "kube-system" namespace has status "Ready":"True"
	I0531 19:27:25.918568  135811 pod_ready.go:81] duration metric: took 516.640544ms waiting for pod "kube-scheduler-pause-142925" in "kube-system" namespace to be "Ready" ...
	I0531 19:27:25.918577  135811 pod_ready.go:38] duration metric: took 8.596668669s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
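
Each pod_ready.go wait above polls a single pod until its Ready condition is True, bounded by the 4m0s budget. A sketch of that wait using client-go's polling helper; the function name and the 500ms interval are illustrative rather than minikube's actual values:

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // waitPodReady polls until the pod's Ready condition is True or the
    // timeout elapses, mirroring the per-pod waits in the log above.
    func waitPodReady(cs *kubernetes.Clientset, ns, name string, timeout time.Duration) error {
        return wait.PollImmediate(500*time.Millisecond, timeout, func() (bool, error) {
            pod, err := cs.CoreV1().Pods(ns).Get(context.Background(), name, metav1.GetOptions{})
            if err != nil {
                return false, nil // transient API errors: keep polling
            }
            for _, c := range pod.Status.Conditions {
                if c.Type == corev1.PodReady {
                    return c.Status == corev1.ConditionTrue, nil
                }
            }
            return false, nil
        })
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        fmt.Println(waitPodReady(cs, "kube-system", "etcd-pause-142925", 4*time.Minute))
    }
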
	I0531 19:27:25.918593  135811 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0531 19:27:25.929160  135811 ops.go:34] apiserver oom_adj: -16
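
The oom_adj check confirms the restarted API server still carries its kubelet-assigned OOM protection (-16 here) by reading /proc/<pid>/oom_adj. A sketch of the same read; the single-PID pgrep handling is simplified:

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "strings"
    )

    func main() {
        out, err := exec.Command("pgrep", "kube-apiserver").Output()
        if err != nil {
            fmt.Println("apiserver not running:", err)
            return
        }
        pid := strings.TrimSpace(strings.SplitN(string(out), "\n", 2)[0])
        data, err := os.ReadFile("/proc/" + pid + "/oom_adj")
        if err != nil {
            panic(err)
        }
        fmt.Println("apiserver oom_adj:", strings.TrimSpace(string(data))) // -16 in this run
    }
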
	I0531 19:27:25.929178  135811 kubeadm.go:640] restartCluster took 30.779224992s
	I0531 19:27:25.929186  135811 kubeadm.go:406] StartCluster complete in 30.916230702s
	I0531 19:27:25.929201  135811 settings.go:142] acquiring lock: {Name:mk7112454687e7bda5617b0aa762b583179f0f78 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0531 19:27:25.929255  135811 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/16569-2389/kubeconfig
	I0531 19:27:25.930230  135811 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16569-2389/kubeconfig: {Name:mk0c7b1a200a0a97aa7bf4307790fd99336ec425 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0531 19:27:25.934269  135811 kapi.go:59] client config for pause-142925: &rest.Config{Host:"https://192.168.76.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/16569-2389/.minikube/profiles/pause-142925/client.crt", KeyFile:"/home/jenkins/minikube-integration/16569-2389/.minikube/profiles/pause-142925/client.key", CAFile:"/home/jenkins/minikube-integration/16569-2389/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x13dde10), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0531 19:27:25.936232  135811 addons.go:496] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false volumesnapshots:false]
	I0531 19:27:25.936365  135811 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0531 19:27:25.936632  135811 config.go:182] Loaded profile config "pause-142925": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.27.2
	I0531 19:27:25.952945  135811 out.go:177] * Enabled addons: 
	I0531 19:27:25.950460  135811 kapi.go:248] "coredns" deployment in "kube-system" namespace and "pause-142925" context rescaled to 1 replicas
	I0531 19:27:25.953098  135811 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0531 19:27:25.954776  135811 out.go:177] * Verifying Kubernetes components...
	I0531 19:27:25.956367  135811 addons.go:499] enable addons completed in 20.136338ms: enabled=[]
	I0531 19:27:25.958063  135811 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0531 19:27:26.124520  135811 start.go:889] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0531 19:27:26.124565  135811 node_ready.go:35] waiting up to 6m0s for node "pause-142925" to be "Ready" ...
	I0531 19:27:26.128586  135811 node_ready.go:49] node "pause-142925" has status "Ready":"True"
	I0531 19:27:26.128605  135811 node_ready.go:38] duration metric: took 4.026908ms waiting for node "pause-142925" to be "Ready" ...
	I0531 19:27:26.128613  135811 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0531 19:27:26.135699  135811 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5d78c9869d-pkjvx" in "kube-system" namespace to be "Ready" ...
	I0531 19:27:26.142552  135811 pod_ready.go:92] pod "coredns-5d78c9869d-pkjvx" in "kube-system" namespace has status "Ready":"True"
	I0531 19:27:26.142578  135811 pod_ready.go:81] duration metric: took 6.845799ms waiting for pod "coredns-5d78c9869d-pkjvx" in "kube-system" namespace to be "Ready" ...
	I0531 19:27:26.142591  135811 pod_ready.go:78] waiting up to 6m0s for pod "etcd-pause-142925" in "kube-system" namespace to be "Ready" ...
	I0531 19:27:26.267804  135811 pod_ready.go:92] pod "etcd-pause-142925" in "kube-system" namespace has status "Ready":"True"
	I0531 19:27:26.267831  135811 pod_ready.go:81] duration metric: took 125.231544ms waiting for pod "etcd-pause-142925" in "kube-system" namespace to be "Ready" ...
	I0531 19:27:26.267846  135811 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-pause-142925" in "kube-system" namespace to be "Ready" ...
	I0531 19:27:26.700233  135811 pod_ready.go:92] pod "kube-apiserver-pause-142925" in "kube-system" namespace has status "Ready":"True"
	I0531 19:27:26.700300  135811 pod_ready.go:81] duration metric: took 432.445838ms waiting for pod "kube-apiserver-pause-142925" in "kube-system" namespace to be "Ready" ...
	I0531 19:27:26.700327  135811 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-pause-142925" in "kube-system" namespace to be "Ready" ...
	I0531 19:27:27.068931  135811 pod_ready.go:92] pod "kube-controller-manager-pause-142925" in "kube-system" namespace has status "Ready":"True"
	I0531 19:27:27.068956  135811 pod_ready.go:81] duration metric: took 368.608831ms waiting for pod "kube-controller-manager-pause-142925" in "kube-system" namespace to be "Ready" ...
	I0531 19:27:27.068970  135811 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-hrhmq" in "kube-system" namespace to be "Ready" ...
	I0531 19:27:27.467287  135811 pod_ready.go:92] pod "kube-proxy-hrhmq" in "kube-system" namespace has status "Ready":"True"
	I0531 19:27:27.467309  135811 pod_ready.go:81] duration metric: took 398.331892ms waiting for pod "kube-proxy-hrhmq" in "kube-system" namespace to be "Ready" ...
	I0531 19:27:27.467323  135811 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-pause-142925" in "kube-system" namespace to be "Ready" ...
	I0531 19:27:27.867249  135811 pod_ready.go:92] pod "kube-scheduler-pause-142925" in "kube-system" namespace has status "Ready":"True"
	I0531 19:27:27.867268  135811 pod_ready.go:81] duration metric: took 399.938245ms waiting for pod "kube-scheduler-pause-142925" in "kube-system" namespace to be "Ready" ...
	I0531 19:27:27.867277  135811 pod_ready.go:38] duration metric: took 1.738654563s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0531 19:27:27.867292  135811 api_server.go:52] waiting for apiserver process to appear ...
	I0531 19:27:27.867347  135811 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 19:27:27.882436  135811 api_server.go:72] duration metric: took 1.929266731s to wait for apiserver process to appear ...
	I0531 19:27:27.882456  135811 api_server.go:88] waiting for apiserver healthz status ...
	I0531 19:27:27.882475  135811 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0531 19:27:27.892032  135811 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I0531 19:27:27.893580  135811 api_server.go:141] control plane version: v1.27.2
	I0531 19:27:27.893599  135811 api_server.go:131] duration metric: took 11.136038ms to wait for apiserver health ...
	I0531 19:27:27.893608  135811 system_pods.go:43] waiting for kube-system pods to appear ...
	I0531 19:27:28.071588  135811 system_pods.go:59] 7 kube-system pods found
	I0531 19:27:28.071665  135811 system_pods.go:61] "coredns-5d78c9869d-pkjvx" [684be7a1-9260-4d7d-afe4-22eba3383872] Running
	I0531 19:27:28.071686  135811 system_pods.go:61] "etcd-pause-142925" [62113c03-4eb3-4204-b5a1-3a827d4aebc3] Running
	I0531 19:27:28.071707  135811 system_pods.go:61] "kindnet-tj2db" [78b57b57-65bd-42d1-9c09-929951cdcb97] Running
	I0531 19:27:28.071748  135811 system_pods.go:61] "kube-apiserver-pause-142925" [1fe7f5b8-f674-4b80-84b9-05168e911372] Running
	I0531 19:27:28.071773  135811 system_pods.go:61] "kube-controller-manager-pause-142925" [418d12fe-7b0c-4df2-9a9f-0ee8d39ad60f] Running
	I0531 19:27:28.071797  135811 system_pods.go:61] "kube-proxy-hrhmq" [41ccce12-4ef2-49e2-9bbd-a664a715e971] Running
	I0531 19:27:28.071839  135811 system_pods.go:61] "kube-scheduler-pause-142925" [58ccb0c6-891d-4066-92b2-618238e7d456] Running
	I0531 19:27:28.071863  135811 system_pods.go:74] duration metric: took 178.249624ms to wait for pod list to return data ...
	I0531 19:27:28.071887  135811 default_sa.go:34] waiting for default service account to be created ...
	I0531 19:27:28.268788  135811 default_sa.go:45] found service account: "default"
	I0531 19:27:28.268812  135811 default_sa.go:55] duration metric: took 196.892481ms for default service account to be created ...
	I0531 19:27:28.268822  135811 system_pods.go:116] waiting for k8s-apps to be running ...
	I0531 19:27:28.473491  135811 system_pods.go:86] 7 kube-system pods found
	I0531 19:27:28.473525  135811 system_pods.go:89] "coredns-5d78c9869d-pkjvx" [684be7a1-9260-4d7d-afe4-22eba3383872] Running
	I0531 19:27:28.473533  135811 system_pods.go:89] "etcd-pause-142925" [62113c03-4eb3-4204-b5a1-3a827d4aebc3] Running
	I0531 19:27:28.473539  135811 system_pods.go:89] "kindnet-tj2db" [78b57b57-65bd-42d1-9c09-929951cdcb97] Running
	I0531 19:27:28.473544  135811 system_pods.go:89] "kube-apiserver-pause-142925" [1fe7f5b8-f674-4b80-84b9-05168e911372] Running
	I0531 19:27:28.473550  135811 system_pods.go:89] "kube-controller-manager-pause-142925" [418d12fe-7b0c-4df2-9a9f-0ee8d39ad60f] Running
	I0531 19:27:28.473773  135811 system_pods.go:89] "kube-proxy-hrhmq" [41ccce12-4ef2-49e2-9bbd-a664a715e971] Running
	I0531 19:27:28.473787  135811 system_pods.go:89] "kube-scheduler-pause-142925" [58ccb0c6-891d-4066-92b2-618238e7d456] Running
	I0531 19:27:28.473794  135811 system_pods.go:126] duration metric: took 204.96804ms to wait for k8s-apps to be running ...
	I0531 19:27:28.473802  135811 system_svc.go:44] waiting for kubelet service to be running ....
	I0531 19:27:28.473872  135811 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0531 19:27:28.512619  135811 system_svc.go:56] duration metric: took 38.806203ms WaitForService to wait for kubelet.
	I0531 19:27:28.512654  135811 kubeadm.go:581] duration metric: took 2.559490256s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0531 19:27:28.512673  135811 node_conditions.go:102] verifying NodePressure condition ...
	I0531 19:27:28.667947  135811 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0531 19:27:28.667972  135811 node_conditions.go:123] node cpu capacity is 2
	I0531 19:27:28.667983  135811 node_conditions.go:105] duration metric: took 155.304761ms to run NodePressure ...
	I0531 19:27:28.667993  135811 start.go:228] waiting for startup goroutines ...
	I0531 19:27:28.668000  135811 start.go:233] waiting for cluster config update ...
	I0531 19:27:28.668008  135811 start.go:242] writing updated cluster config ...
	I0531 19:27:28.668314  135811 ssh_runner.go:195] Run: rm -f paused
	I0531 19:27:28.764990  135811 start.go:573] kubectl: 1.27.2, cluster: 1.27.2 (minor skew: 0)
	I0531 19:27:28.768941  135811 out.go:177] * Done! kubectl is now configured to use "pause-142925" cluster and "default" namespace by default

** /stderr **
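The readiness checks in the trace above can be reproduced by hand against the same profile. A minimal sketch, assuming the certificate paths and the 192.168.76.2:8443 endpoint from the client config logged above (the node address is only reachable from the host running the Docker network):

	# Probe apiserver healthz the same way the start code does, presenting the profile's client cert.
	curl --cacert /home/jenkins/minikube-integration/16569-2389/.minikube/ca.crt \
	     --cert /home/jenkins/minikube-integration/16569-2389/.minikube/profiles/pause-142925/client.crt \
	     --key /home/jenkins/minikube-integration/16569-2389/.minikube/profiles/pause-142925/client.key \
	     https://192.168.76.2:8443/healthz

	# Wait on one of the same system-critical pod labels the test polls for.
	kubectl --context pause-142925 -n kube-system wait pod -l k8s-app=kube-dns \
	     --for=condition=Ready --timeout=6m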
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect pause-142925
helpers_test.go:235: (dbg) docker inspect pause-142925:

-- stdout --
	[
	    {
	        "Id": "621845fcc6fd2e5cff9530c4e999c0faacc75443d3b2bab32de290cb4e7e8f79",
	        "Created": "2023-05-31T19:25:33.629379075Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 131258,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-05-31T19:25:34.004434783Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:dee4774de4f99268e16c76379a36a4607bed47635a069a7e60c17cd24d9aaa76",
	        "ResolvConfPath": "/var/lib/docker/containers/621845fcc6fd2e5cff9530c4e999c0faacc75443d3b2bab32de290cb4e7e8f79/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/621845fcc6fd2e5cff9530c4e999c0faacc75443d3b2bab32de290cb4e7e8f79/hostname",
	        "HostsPath": "/var/lib/docker/containers/621845fcc6fd2e5cff9530c4e999c0faacc75443d3b2bab32de290cb4e7e8f79/hosts",
	        "LogPath": "/var/lib/docker/containers/621845fcc6fd2e5cff9530c4e999c0faacc75443d3b2bab32de290cb4e7e8f79/621845fcc6fd2e5cff9530c4e999c0faacc75443d3b2bab32de290cb4e7e8f79-json.log",
	        "Name": "/pause-142925",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "pause-142925:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "pause-142925",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2147483648,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4294967296,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/a40260018971da61143bd652a5a26eab3ad58faf3058c1b2351592bd939bf784-init/diff:/var/lib/docker/overlay2/548bced7e749d102323bab71db162b075785f916e2a896d29f3adc2c3d7fbea8/diff",
	                "MergedDir": "/var/lib/docker/overlay2/a40260018971da61143bd652a5a26eab3ad58faf3058c1b2351592bd939bf784/merged",
	                "UpperDir": "/var/lib/docker/overlay2/a40260018971da61143bd652a5a26eab3ad58faf3058c1b2351592bd939bf784/diff",
	                "WorkDir": "/var/lib/docker/overlay2/a40260018971da61143bd652a5a26eab3ad58faf3058c1b2351592bd939bf784/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "pause-142925",
	                "Source": "/var/lib/docker/volumes/pause-142925/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "pause-142925",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685034446-16582@sha256:aa728b22374c829d1e5b0a5d64d51d3e0ae0f2b191381d957516fdff68f357c8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "pause-142925",
	                "name.minikube.sigs.k8s.io": "pause-142925",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "e504dbe60eba776011d04c15569218a453d7b933c958c8b12d242c6d073d5d26",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32974"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32973"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32970"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32972"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32971"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/e504dbe60eba",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "pause-142925": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "621845fcc6fd",
	                        "pause-142925"
	                    ],
	                    "NetworkID": "0d663842a08da493e682e79b1106b90ddf6db82405fe988e860d90e528e3e11f",
	                    "EndpointID": "f5be19752cd229708b2c335f8917935a22e19a4b172c43d62fe9db4304c36031",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:4c:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
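Individual fields of the same inspect output can be extracted with Go-template filters instead of reading the whole dump; a sketch, assuming the pause-142925 container shown above is still running:

	# Container state and the node IP on the profile's network
	docker inspect -f '{{.State.Status}}' pause-142925
	docker inspect -f '{{(index .NetworkSettings.Networks "pause-142925").IPAddress}}' pause-142925

	# Host port published for the apiserver (8443/tcp -> 127.0.0.1:32971 in the dump)
	docker inspect -f '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}' pause-142925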
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p pause-142925 -n pause-142925
helpers_test.go:244: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p pause-142925 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p pause-142925 logs -n 25: (2.30414329s)
helpers_test.go:252: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |           Profile           |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	| delete  | -p insufficient-storage-158364 | insufficient-storage-158364 | jenkins | v1.30.1 | 31 May 23 19:19 UTC | 31 May 23 19:19 UTC |
	| start   | -p NoKubernetes-969645         | NoKubernetes-969645         | jenkins | v1.30.1 | 31 May 23 19:19 UTC |                     |
	|         | --no-kubernetes                |                             |         |         |                     |                     |
	|         | --kubernetes-version=1.20      |                             |         |         |                     |                     |
	|         | --driver=docker                |                             |         |         |                     |                     |
	|         | --container-runtime=crio       |                             |         |         |                     |                     |
	| start   | -p NoKubernetes-969645         | NoKubernetes-969645         | jenkins | v1.30.1 | 31 May 23 19:19 UTC | 31 May 23 19:20 UTC |
	|         | --driver=docker                |                             |         |         |                     |                     |
	|         | --container-runtime=crio       |                             |         |         |                     |                     |
	| start   | -p NoKubernetes-969645         | NoKubernetes-969645         | jenkins | v1.30.1 | 31 May 23 19:20 UTC | 31 May 23 19:20 UTC |
	|         | --no-kubernetes                |                             |         |         |                     |                     |
	|         | --driver=docker                |                             |         |         |                     |                     |
	|         | --container-runtime=crio       |                             |         |         |                     |                     |
	| delete  | -p NoKubernetes-969645         | NoKubernetes-969645         | jenkins | v1.30.1 | 31 May 23 19:20 UTC | 31 May 23 19:20 UTC |
	| start   | -p NoKubernetes-969645         | NoKubernetes-969645         | jenkins | v1.30.1 | 31 May 23 19:20 UTC | 31 May 23 19:20 UTC |
	|         | --no-kubernetes                |                             |         |         |                     |                     |
	|         | --driver=docker                |                             |         |         |                     |                     |
	|         | --container-runtime=crio       |                             |         |         |                     |                     |
	| ssh     | -p NoKubernetes-969645 sudo    | NoKubernetes-969645         | jenkins | v1.30.1 | 31 May 23 19:20 UTC |                     |
	|         | systemctl is-active --quiet    |                             |         |         |                     |                     |
	|         | service kubelet                |                             |         |         |                     |                     |
	| stop    | -p NoKubernetes-969645         | NoKubernetes-969645         | jenkins | v1.30.1 | 31 May 23 19:20 UTC | 31 May 23 19:20 UTC |
	| start   | -p NoKubernetes-969645         | NoKubernetes-969645         | jenkins | v1.30.1 | 31 May 23 19:20 UTC | 31 May 23 19:20 UTC |
	|         | --driver=docker                |                             |         |         |                     |                     |
	|         | --container-runtime=crio       |                             |         |         |                     |                     |
	| ssh     | -p NoKubernetes-969645 sudo    | NoKubernetes-969645         | jenkins | v1.30.1 | 31 May 23 19:20 UTC |                     |
	|         | systemctl is-active --quiet    |                             |         |         |                     |                     |
	|         | service kubelet                |                             |         |         |                     |                     |
	| delete  | -p NoKubernetes-969645         | NoKubernetes-969645         | jenkins | v1.30.1 | 31 May 23 19:20 UTC | 31 May 23 19:21 UTC |
	| start   | -p kubernetes-upgrade-843072   | kubernetes-upgrade-843072   | jenkins | v1.30.1 | 31 May 23 19:21 UTC | 31 May 23 19:22 UTC |
	|         | --memory=2200                  |                             |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0   |                             |         |         |                     |                     |
	|         | --alsologtostderr              |                             |         |         |                     |                     |
	|         | -v=1 --driver=docker           |                             |         |         |                     |                     |
	|         | --container-runtime=crio       |                             |         |         |                     |                     |
	| delete  | -p missing-upgrade-915836      | missing-upgrade-915836      | jenkins | v1.30.1 | 31 May 23 19:21 UTC | 31 May 23 19:21 UTC |
	| stop    | -p kubernetes-upgrade-843072   | kubernetes-upgrade-843072   | jenkins | v1.30.1 | 31 May 23 19:22 UTC | 31 May 23 19:22 UTC |
	| start   | -p kubernetes-upgrade-843072   | kubernetes-upgrade-843072   | jenkins | v1.30.1 | 31 May 23 19:22 UTC | 31 May 23 19:26 UTC |
	|         | --memory=2200                  |                             |         |         |                     |                     |
	|         | --kubernetes-version=v1.27.2   |                             |         |         |                     |                     |
	|         | --alsologtostderr              |                             |         |         |                     |                     |
	|         | -v=1 --driver=docker           |                             |         |         |                     |                     |
	|         | --container-runtime=crio       |                             |         |         |                     |                     |
	| start   | -p stopped-upgrade-577066      | stopped-upgrade-577066      | jenkins | v1.30.1 | 31 May 23 19:23 UTC |                     |
	|         | --memory=2200                  |                             |         |         |                     |                     |
	|         | --alsologtostderr              |                             |         |         |                     |                     |
	|         | -v=1 --driver=docker           |                             |         |         |                     |                     |
	|         | --container-runtime=crio       |                             |         |         |                     |                     |
	| delete  | -p stopped-upgrade-577066      | stopped-upgrade-577066      | jenkins | v1.30.1 | 31 May 23 19:24 UTC | 31 May 23 19:24 UTC |
	| start   | -p running-upgrade-862679      | running-upgrade-862679      | jenkins | v1.30.1 | 31 May 23 19:25 UTC |                     |
	|         | --memory=2200                  |                             |         |         |                     |                     |
	|         | --alsologtostderr              |                             |         |         |                     |                     |
	|         | -v=1 --driver=docker           |                             |         |         |                     |                     |
	|         | --container-runtime=crio       |                             |         |         |                     |                     |
	| delete  | -p running-upgrade-862679      | running-upgrade-862679      | jenkins | v1.30.1 | 31 May 23 19:25 UTC | 31 May 23 19:25 UTC |
	| start   | -p pause-142925 --memory=2048  | pause-142925                | jenkins | v1.30.1 | 31 May 23 19:25 UTC | 31 May 23 19:26 UTC |
	|         | --install-addons=false         |                             |         |         |                     |                     |
	|         | --wait=all --driver=docker     |                             |         |         |                     |                     |
	|         | --container-runtime=crio       |                             |         |         |                     |                     |
	| start   | -p pause-142925                | pause-142925                | jenkins | v1.30.1 | 31 May 23 19:26 UTC | 31 May 23 19:27 UTC |
	|         | --alsologtostderr              |                             |         |         |                     |                     |
	|         | -v=1 --driver=docker           |                             |         |         |                     |                     |
	|         | --container-runtime=crio       |                             |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-843072   | kubernetes-upgrade-843072   | jenkins | v1.30.1 | 31 May 23 19:26 UTC |                     |
	|         | --memory=2200                  |                             |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0   |                             |         |         |                     |                     |
	|         | --driver=docker                |                             |         |         |                     |                     |
	|         | --container-runtime=crio       |                             |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-843072   | kubernetes-upgrade-843072   | jenkins | v1.30.1 | 31 May 23 19:26 UTC | 31 May 23 19:27 UTC |
	|         | --memory=2200                  |                             |         |         |                     |                     |
	|         | --kubernetes-version=v1.27.2   |                             |         |         |                     |                     |
	|         | --alsologtostderr              |                             |         |         |                     |                     |
	|         | -v=1 --driver=docker           |                             |         |         |                     |                     |
	|         | --container-runtime=crio       |                             |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-843072   | kubernetes-upgrade-843072   | jenkins | v1.30.1 | 31 May 23 19:27 UTC | 31 May 23 19:27 UTC |
	| start   | -p force-systemd-flag-124615   | force-systemd-flag-124615   | jenkins | v1.30.1 | 31 May 23 19:27 UTC |                     |
	|         | --memory=2048 --force-systemd  |                             |         |         |                     |                     |
	|         | --alsologtostderr              |                             |         |         |                     |                     |
	|         | -v=5 --driver=docker           |                             |         |         |                     |                     |
	|         | --container-runtime=crio       |                             |         |         |                     |                     |
	|---------|--------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/05/31 19:27:27
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.20.4 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0531 19:27:27.796806  139599 out.go:296] Setting OutFile to fd 1 ...
	I0531 19:27:27.796965  139599 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0531 19:27:27.796975  139599 out.go:309] Setting ErrFile to fd 2...
	I0531 19:27:27.796981  139599 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0531 19:27:27.797137  139599 root.go:336] Updating PATH: /home/jenkins/minikube-integration/16569-2389/.minikube/bin
	I0531 19:27:27.797532  139599 out.go:303] Setting JSON to false
	I0531 19:27:27.798671  139599 start.go:127] hostinfo: {"hostname":"ip-172-31-21-244","uptime":4193,"bootTime":1685557055,"procs":374,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1037-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I0531 19:27:27.798780  139599 start.go:137] virtualization:  
	I0531 19:27:27.801240  139599 out.go:177] * [force-systemd-flag-124615] minikube v1.30.1 on Ubuntu 20.04 (arm64)
	I0531 19:27:27.803460  139599 out.go:177]   - MINIKUBE_LOCATION=16569
	I0531 19:27:27.805234  139599 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0531 19:27:27.803664  139599 notify.go:220] Checking for updates...
	I0531 19:27:27.808952  139599 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/16569-2389/kubeconfig
	I0531 19:27:27.810794  139599 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/16569-2389/.minikube
	I0531 19:27:27.812629  139599 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0531 19:27:27.814340  139599 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0531 19:27:25.958063  135811 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0531 19:27:26.124520  135811 start.go:889] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0531 19:27:26.124565  135811 node_ready.go:35] waiting up to 6m0s for node "pause-142925" to be "Ready" ...
	I0531 19:27:26.128586  135811 node_ready.go:49] node "pause-142925" has status "Ready":"True"
	I0531 19:27:26.128605  135811 node_ready.go:38] duration metric: took 4.026908ms waiting for node "pause-142925" to be "Ready" ...
	I0531 19:27:26.128613  135811 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0531 19:27:26.135699  135811 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5d78c9869d-pkjvx" in "kube-system" namespace to be "Ready" ...
	I0531 19:27:26.142552  135811 pod_ready.go:92] pod "coredns-5d78c9869d-pkjvx" in "kube-system" namespace has status "Ready":"True"
	I0531 19:27:26.142578  135811 pod_ready.go:81] duration metric: took 6.845799ms waiting for pod "coredns-5d78c9869d-pkjvx" in "kube-system" namespace to be "Ready" ...
	I0531 19:27:26.142591  135811 pod_ready.go:78] waiting up to 6m0s for pod "etcd-pause-142925" in "kube-system" namespace to be "Ready" ...
	I0531 19:27:26.267804  135811 pod_ready.go:92] pod "etcd-pause-142925" in "kube-system" namespace has status "Ready":"True"
	I0531 19:27:26.267831  135811 pod_ready.go:81] duration metric: took 125.231544ms waiting for pod "etcd-pause-142925" in "kube-system" namespace to be "Ready" ...
	I0531 19:27:26.267846  135811 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-pause-142925" in "kube-system" namespace to be "Ready" ...
	I0531 19:27:26.700233  135811 pod_ready.go:92] pod "kube-apiserver-pause-142925" in "kube-system" namespace has status "Ready":"True"
	I0531 19:27:26.700300  135811 pod_ready.go:81] duration metric: took 432.445838ms waiting for pod "kube-apiserver-pause-142925" in "kube-system" namespace to be "Ready" ...
	I0531 19:27:26.700327  135811 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-pause-142925" in "kube-system" namespace to be "Ready" ...
	I0531 19:27:27.068931  135811 pod_ready.go:92] pod "kube-controller-manager-pause-142925" in "kube-system" namespace has status "Ready":"True"
	I0531 19:27:27.068956  135811 pod_ready.go:81] duration metric: took 368.608831ms waiting for pod "kube-controller-manager-pause-142925" in "kube-system" namespace to be "Ready" ...
	I0531 19:27:27.068970  135811 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-hrhmq" in "kube-system" namespace to be "Ready" ...
	I0531 19:27:27.467287  135811 pod_ready.go:92] pod "kube-proxy-hrhmq" in "kube-system" namespace has status "Ready":"True"
	I0531 19:27:27.467309  135811 pod_ready.go:81] duration metric: took 398.331892ms waiting for pod "kube-proxy-hrhmq" in "kube-system" namespace to be "Ready" ...
	I0531 19:27:27.467323  135811 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-pause-142925" in "kube-system" namespace to be "Ready" ...
	I0531 19:27:27.817055  139599 config.go:182] Loaded profile config "pause-142925": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.27.2
	I0531 19:27:27.817255  139599 driver.go:375] Setting default libvirt URI to qemu:///system
	I0531 19:27:27.848938  139599 docker.go:121] docker version: linux-24.0.2:Docker Engine - Community
	I0531 19:27:27.849042  139599 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0531 19:27:27.951657  139599 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:6 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:32 OomKillDisable:true NGoroutines:45 SystemTime:2023-05-31 19:27:27.939156728 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1037-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215166976 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:24.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dce8eb055cbb6872793272b4f20ed16117344f8 Expected:3dce8eb055cbb6872793272b4f20ed16117344f8} RuncCommit:{ID:v1.1.7-0-g860f061 Expected:v1.1.7-0-g860f061} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.5] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.18.1]] Warnings:<nil>}}
	I0531 19:27:27.951767  139599 docker.go:294] overlay module found
	I0531 19:27:27.953551  139599 out.go:177] * Using the docker driver based on user configuration
	I0531 19:27:27.955349  139599 start.go:297] selected driver: docker
	I0531 19:27:27.955368  139599 start.go:875] validating driver "docker" against <nil>
	I0531 19:27:27.955380  139599 start.go:886] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0531 19:27:27.956032  139599 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0531 19:27:28.023088  139599 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:6 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:32 OomKillDisable:true NGoroutines:45 SystemTime:2023-05-31 19:27:28.012120894 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1037-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215166976 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:24.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dce8eb055cbb6872793272b4f20ed16117344f8 Expected:3dce8eb055cbb6872793272b4f20ed16117344f8} RuncCommit:{ID:v1.1.7-0-g860f061 Expected:v1.1.7-0-g860f061} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.5] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.18.1]] Warnings:<nil>}}
	I0531 19:27:28.023251  139599 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0531 19:27:28.023458  139599 start_flags.go:901] Wait components to verify : map[apiserver:true system_pods:true]
	I0531 19:27:28.025730  139599 out.go:177] * Using Docker driver with root privileges
	I0531 19:27:28.027633  139599 cni.go:84] Creating CNI manager for ""
	I0531 19:27:28.027657  139599 cni.go:142] "docker" driver + "crio" runtime found, recommending kindnet
	I0531 19:27:28.027667  139599 start_flags.go:314] Found "CNI" CNI - setting NetworkPlugin=cni
	I0531 19:27:28.027684  139599 start_flags.go:319] config:
	{Name:force-systemd-flag-124615 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685034446-16582@sha256:aa728b22374c829d1e5b0a5d64d51d3e0ae0f2b191381d957516fdff68f357c8 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:force-systemd-flag-124615 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0531 19:27:28.029691  139599 out.go:177] * Starting control plane node force-systemd-flag-124615 in cluster force-systemd-flag-124615
	I0531 19:27:28.031166  139599 cache.go:122] Beginning downloading kic base image for docker with crio
	I0531 19:27:28.032937  139599 out.go:177] * Pulling base image ...
	I0531 19:27:28.034646  139599 preload.go:132] Checking if preload exists for k8s version v1.27.2 and runtime crio
	I0531 19:27:28.034698  139599 preload.go:148] Found local preload: /home/jenkins/minikube-integration/16569-2389/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.2-cri-o-overlay-arm64.tar.lz4
	I0531 19:27:28.034716  139599 cache.go:57] Caching tarball of preloaded images
	I0531 19:27:28.034798  139599 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685034446-16582@sha256:aa728b22374c829d1e5b0a5d64d51d3e0ae0f2b191381d957516fdff68f357c8 in local docker daemon
	I0531 19:27:28.034857  139599 preload.go:174] Found /home/jenkins/minikube-integration/16569-2389/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.2-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I0531 19:27:28.034866  139599 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.2 on crio
	I0531 19:27:28.034981  139599 profile.go:148] Saving config to /home/jenkins/minikube-integration/16569-2389/.minikube/profiles/force-systemd-flag-124615/config.json ...
	I0531 19:27:28.034998  139599 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16569-2389/.minikube/profiles/force-systemd-flag-124615/config.json: {Name:mkb5a567050c4d73e10be27c3cca047191345962 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0531 19:27:28.053029  139599 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685034446-16582@sha256:aa728b22374c829d1e5b0a5d64d51d3e0ae0f2b191381d957516fdff68f357c8 in local docker daemon, skipping pull
	I0531 19:27:28.053053  139599 cache.go:145] gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685034446-16582@sha256:aa728b22374c829d1e5b0a5d64d51d3e0ae0f2b191381d957516fdff68f357c8 exists in daemon, skipping load
	I0531 19:27:28.053075  139599 cache.go:195] Successfully downloaded all kic artifacts
	I0531 19:27:28.053121  139599 start.go:364] acquiring machines lock for force-systemd-flag-124615: {Name:mk87f044452f522f924326d91066ec90e6b98fcf Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0531 19:27:28.053242  139599 start.go:368] acquired machines lock for "force-systemd-flag-124615" in 104.548µs
	I0531 19:27:28.053268  139599 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-124615 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685034446-16582@sha256:aa728b22374c829d1e5b0a5d64d51d3e0ae0f2b191381d957516fdff68f357c8 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:force-systemd-flag-124615 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:} &{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0531 19:27:28.053348  139599 start.go:125] createHost starting for "" (driver="docker")
	I0531 19:27:27.867249  135811 pod_ready.go:92] pod "kube-scheduler-pause-142925" in "kube-system" namespace has status "Ready":"True"
	I0531 19:27:27.867268  135811 pod_ready.go:81] duration metric: took 399.938245ms waiting for pod "kube-scheduler-pause-142925" in "kube-system" namespace to be "Ready" ...
	I0531 19:27:27.867277  135811 pod_ready.go:38] duration metric: took 1.738654563s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0531 19:27:27.867292  135811 api_server.go:52] waiting for apiserver process to appear ...
	I0531 19:27:27.867347  135811 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 19:27:27.882436  135811 api_server.go:72] duration metric: took 1.929266731s to wait for apiserver process to appear ...
	I0531 19:27:27.882456  135811 api_server.go:88] waiting for apiserver healthz status ...
	I0531 19:27:27.882475  135811 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0531 19:27:27.892032  135811 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I0531 19:27:27.893580  135811 api_server.go:141] control plane version: v1.27.2
	I0531 19:27:27.893599  135811 api_server.go:131] duration metric: took 11.136038ms to wait for apiserver health ...
	I0531 19:27:27.893608  135811 system_pods.go:43] waiting for kube-system pods to appear ...
	I0531 19:27:28.071588  135811 system_pods.go:59] 7 kube-system pods found
	I0531 19:27:28.071665  135811 system_pods.go:61] "coredns-5d78c9869d-pkjvx" [684be7a1-9260-4d7d-afe4-22eba3383872] Running
	I0531 19:27:28.071686  135811 system_pods.go:61] "etcd-pause-142925" [62113c03-4eb3-4204-b5a1-3a827d4aebc3] Running
	I0531 19:27:28.071707  135811 system_pods.go:61] "kindnet-tj2db" [78b57b57-65bd-42d1-9c09-929951cdcb97] Running
	I0531 19:27:28.071748  135811 system_pods.go:61] "kube-apiserver-pause-142925" [1fe7f5b8-f674-4b80-84b9-05168e911372] Running
	I0531 19:27:28.071773  135811 system_pods.go:61] "kube-controller-manager-pause-142925" [418d12fe-7b0c-4df2-9a9f-0ee8d39ad60f] Running
	I0531 19:27:28.071797  135811 system_pods.go:61] "kube-proxy-hrhmq" [41ccce12-4ef2-49e2-9bbd-a664a715e971] Running
	I0531 19:27:28.071839  135811 system_pods.go:61] "kube-scheduler-pause-142925" [58ccb0c6-891d-4066-92b2-618238e7d456] Running
	I0531 19:27:28.071863  135811 system_pods.go:74] duration metric: took 178.249624ms to wait for pod list to return data ...
	I0531 19:27:28.071887  135811 default_sa.go:34] waiting for default service account to be created ...
	I0531 19:27:28.268788  135811 default_sa.go:45] found service account: "default"
	I0531 19:27:28.268812  135811 default_sa.go:55] duration metric: took 196.892481ms for default service account to be created ...
	I0531 19:27:28.268822  135811 system_pods.go:116] waiting for k8s-apps to be running ...
	I0531 19:27:28.473491  135811 system_pods.go:86] 7 kube-system pods found
	I0531 19:27:28.473525  135811 system_pods.go:89] "coredns-5d78c9869d-pkjvx" [684be7a1-9260-4d7d-afe4-22eba3383872] Running
	I0531 19:27:28.473533  135811 system_pods.go:89] "etcd-pause-142925" [62113c03-4eb3-4204-b5a1-3a827d4aebc3] Running
	I0531 19:27:28.473539  135811 system_pods.go:89] "kindnet-tj2db" [78b57b57-65bd-42d1-9c09-929951cdcb97] Running
	I0531 19:27:28.473544  135811 system_pods.go:89] "kube-apiserver-pause-142925" [1fe7f5b8-f674-4b80-84b9-05168e911372] Running
	I0531 19:27:28.473550  135811 system_pods.go:89] "kube-controller-manager-pause-142925" [418d12fe-7b0c-4df2-9a9f-0ee8d39ad60f] Running
	I0531 19:27:28.473773  135811 system_pods.go:89] "kube-proxy-hrhmq" [41ccce12-4ef2-49e2-9bbd-a664a715e971] Running
	I0531 19:27:28.473787  135811 system_pods.go:89] "kube-scheduler-pause-142925" [58ccb0c6-891d-4066-92b2-618238e7d456] Running
	I0531 19:27:28.473794  135811 system_pods.go:126] duration metric: took 204.96804ms to wait for k8s-apps to be running ...
	I0531 19:27:28.473802  135811 system_svc.go:44] waiting for kubelet service to be running ....
	I0531 19:27:28.473872  135811 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0531 19:27:28.512619  135811 system_svc.go:56] duration metric: took 38.806203ms WaitForService to wait for kubelet.
	I0531 19:27:28.512654  135811 kubeadm.go:581] duration metric: took 2.559490256s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0531 19:27:28.512673  135811 node_conditions.go:102] verifying NodePressure condition ...
	I0531 19:27:28.667947  135811 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0531 19:27:28.667972  135811 node_conditions.go:123] node cpu capacity is 2
	I0531 19:27:28.667983  135811 node_conditions.go:105] duration metric: took 155.304761ms to run NodePressure ...
	I0531 19:27:28.667993  135811 start.go:228] waiting for startup goroutines ...
	I0531 19:27:28.668000  135811 start.go:233] waiting for cluster config update ...
	I0531 19:27:28.668008  135811 start.go:242] writing updated cluster config ...
	I0531 19:27:28.668314  135811 ssh_runner.go:195] Run: rm -f paused
	I0531 19:27:28.764990  135811 start.go:573] kubectl: 1.27.2, cluster: 1.27.2 (minor skew: 0)
	I0531 19:27:28.768941  135811 out.go:177] * Done! kubectl is now configured to use "pause-142925" cluster and "default" namespace by default
	
	* 
	* ==> CRI-O <==
	* May 31 19:27:14 pause-142925 crio[2567]: time="2023-05-31 19:27:14.643473439Z" level=info msg="Created container c160494d0bf8307482da0f4b11fa31b2602ac4c68688feea87f60330bab849a1: kube-system/coredns-5d78c9869d-pkjvx/coredns" id=8a45e786-c471-4a04-997a-7c5f0659e15f name=/runtime.v1.RuntimeService/CreateContainer
	May 31 19:27:14 pause-142925 crio[2567]: time="2023-05-31 19:27:14.643972522Z" level=info msg="Starting container: c160494d0bf8307482da0f4b11fa31b2602ac4c68688feea87f60330bab849a1" id=40b9bd52-71fd-4eae-a072-14b733239eef name=/runtime.v1.RuntimeService/StartContainer
	May 31 19:27:14 pause-142925 crio[2567]: time="2023-05-31 19:27:14.654947723Z" level=info msg="Started container" PID=3331 containerID=9273f7b7e15106f96991156699b30fa260e61482461e1588686d29e389333d78 description=kube-system/kindnet-tj2db/kindnet-cni id=78610f2a-91df-4cb0-a1e6-b62aff3b7cba name=/runtime.v1.RuntimeService/StartContainer sandboxID=452b7cd4d17bb888e8f8a8503975bd1a5a3ba14adddf5383d6de894a0413f28f
	May 31 19:27:14 pause-142925 crio[2567]: time="2023-05-31 19:27:14.673942984Z" level=info msg="Started container" PID=3355 containerID=c160494d0bf8307482da0f4b11fa31b2602ac4c68688feea87f60330bab849a1 description=kube-system/coredns-5d78c9869d-pkjvx/coredns id=40b9bd52-71fd-4eae-a072-14b733239eef name=/runtime.v1.RuntimeService/StartContainer sandboxID=16acc94ea4916f4eff611859730cd5bf5e8c05be2421a59e6e744d4040bb2529
	May 31 19:27:14 pause-142925 crio[2567]: time="2023-05-31 19:27:14.885806295Z" level=info msg="Created container c11065f5bc5427833241713211d2b26af3f39670c636150e5f6372db8a1aa6eb: kube-system/kube-proxy-hrhmq/kube-proxy" id=eb48e716-f355-484e-ba8c-afb0fc8b4546 name=/runtime.v1.RuntimeService/CreateContainer
	May 31 19:27:14 pause-142925 crio[2567]: time="2023-05-31 19:27:14.886493116Z" level=info msg="Starting container: c11065f5bc5427833241713211d2b26af3f39670c636150e5f6372db8a1aa6eb" id=fd5bceef-2977-4131-a346-c9359da36650 name=/runtime.v1.RuntimeService/StartContainer
	May 31 19:27:15 pause-142925 crio[2567]: time="2023-05-31 19:27:15.040400023Z" level=info msg="Started container" PID=3348 containerID=c11065f5bc5427833241713211d2b26af3f39670c636150e5f6372db8a1aa6eb description=kube-system/kube-proxy-hrhmq/kube-proxy id=fd5bceef-2977-4131-a346-c9359da36650 name=/runtime.v1.RuntimeService/StartContainer sandboxID=fb743709b673303bc91dca669507e1c2b76b82c343d4720bebf2c016c3cb1f79
	May 31 19:27:15 pause-142925 crio[2567]: time="2023-05-31 19:27:15.178972539Z" level=info msg="CNI monitoring event \"/etc/cni/net.d/10-kindnet.conflist.temp\": CREATE"
	May 31 19:27:15 pause-142925 crio[2567]: time="2023-05-31 19:27:15.204564774Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	May 31 19:27:15 pause-142925 crio[2567]: time="2023-05-31 19:27:15.204601450Z" level=info msg="Updated default CNI network name to kindnet"
	May 31 19:27:15 pause-142925 crio[2567]: time="2023-05-31 19:27:15.204618959Z" level=info msg="CNI monitoring event \"/etc/cni/net.d/10-kindnet.conflist.temp\": WRITE"
	May 31 19:27:15 pause-142925 crio[2567]: time="2023-05-31 19:27:15.216199318Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	May 31 19:27:15 pause-142925 crio[2567]: time="2023-05-31 19:27:15.216235748Z" level=info msg="Updated default CNI network name to kindnet"
	May 31 19:27:15 pause-142925 crio[2567]: time="2023-05-31 19:27:15.228531219Z" level=info msg="CNI monitoring event \"/etc/cni/net.d/10-kindnet.conflist.temp\": WRITE"
	May 31 19:27:15 pause-142925 crio[2567]: time="2023-05-31 19:27:15.265009704Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	May 31 19:27:15 pause-142925 crio[2567]: time="2023-05-31 19:27:15.265044895Z" level=info msg="Updated default CNI network name to kindnet"
	May 31 19:27:15 pause-142925 crio[2567]: time="2023-05-31 19:27:15.265061420Z" level=info msg="CNI monitoring event \"/etc/cni/net.d/10-kindnet.conflist.temp\": WRITE"
	May 31 19:27:15 pause-142925 crio[2567]: time="2023-05-31 19:27:15.290049310Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	May 31 19:27:15 pause-142925 crio[2567]: time="2023-05-31 19:27:15.290091409Z" level=info msg="Updated default CNI network name to kindnet"
	May 31 19:27:15 pause-142925 crio[2567]: time="2023-05-31 19:27:15.290113308Z" level=info msg="CNI monitoring event \"/etc/cni/net.d/10-kindnet.conflist.temp\": RENAME"
	May 31 19:27:15 pause-142925 crio[2567]: time="2023-05-31 19:27:15.305852674Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	May 31 19:27:15 pause-142925 crio[2567]: time="2023-05-31 19:27:15.305893445Z" level=info msg="Updated default CNI network name to kindnet"
	May 31 19:27:15 pause-142925 crio[2567]: time="2023-05-31 19:27:15.305911323Z" level=info msg="CNI monitoring event \"/etc/cni/net.d/10-kindnet.conflist\": CREATE"
	May 31 19:27:15 pause-142925 crio[2567]: time="2023-05-31 19:27:15.312942061Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	May 31 19:27:15 pause-142925 crio[2567]: time="2023-05-31 19:27:15.312996525Z" level=info msg="Updated default CNI network name to kindnet"
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	c11065f5bc542       29921a084542255eb81a1a660a603b1a24636d88b202f9010daae75fa32754c0   16 seconds ago      Running             kube-proxy                2                   fb743709b6733       kube-proxy-hrhmq
	c160494d0bf83       97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108   16 seconds ago      Running             coredns                   2                   16acc94ea4916       coredns-5d78c9869d-pkjvx
	9273f7b7e1510       b18bf71b941bae2e12db1c07e567ad14e4febbc778310a0fc64487f1ac877d79   16 seconds ago      Running             kindnet-cni               2                   452b7cd4d17bb       kindnet-tj2db
	f34040e109e2a       2ee705380c3c59a538b853cbe9ae9886ebbd0001a4cea4add5adeea48e5f48d4   21 seconds ago      Running             kube-controller-manager   2                   a91695c8fa141       kube-controller-manager-pause-142925
	85bfc1cda7f0b       72c9df6be7f1b997e4a31b5cb9aa7262e5278905af97e6a69e341e3f0f9bbaae   21 seconds ago      Running             kube-apiserver            2                   c3149f0c18fe1       kube-apiserver-pause-142925
	1a52edf685c10       24bc64e911039ecf00e263be2161797c758b7d82403ca5516ab64047a477f737   21 seconds ago      Running             etcd                      2                   178cab6c62ccf       etcd-pause-142925
	c659c1625e379       305d7ed1dae2877c3a80d434c5fb9f1aac1aa3d2431c36130a3fcd1970e93840   21 seconds ago      Running             kube-scheduler            3                   4c08723eca1fd       kube-scheduler-pause-142925
	b7eeb9e9931e5       305d7ed1dae2877c3a80d434c5fb9f1aac1aa3d2431c36130a3fcd1970e93840   27 seconds ago      Exited              kube-scheduler            2                   4c08723eca1fd       kube-scheduler-pause-142925
	53d9487d7e1b5       72c9df6be7f1b997e4a31b5cb9aa7262e5278905af97e6a69e341e3f0f9bbaae   38 seconds ago      Exited              kube-apiserver            1                   c3149f0c18fe1       kube-apiserver-pause-142925
	ebefff5a4557a       b18bf71b941bae2e12db1c07e567ad14e4febbc778310a0fc64487f1ac877d79   38 seconds ago      Exited              kindnet-cni               1                   452b7cd4d17bb       kindnet-tj2db
	aecfb452ce5e3       97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108   38 seconds ago      Exited              coredns                   1                   16acc94ea4916       coredns-5d78c9869d-pkjvx
	478eaf89680be       29921a084542255eb81a1a660a603b1a24636d88b202f9010daae75fa32754c0   38 seconds ago      Exited              kube-proxy                1                   fb743709b6733       kube-proxy-hrhmq
	7ee1670fa5d41       24bc64e911039ecf00e263be2161797c758b7d82403ca5516ab64047a477f737   38 seconds ago      Exited              etcd                      1                   178cab6c62ccf       etcd-pause-142925
	ccff2afb9f0d7       2ee705380c3c59a538b853cbe9ae9886ebbd0001a4cea4add5adeea48e5f48d4   38 seconds ago      Exited              kube-controller-manager   1                   a91695c8fa141       kube-controller-manager-pause-142925
	
	* 
	* ==> coredns [aecfb452ce5e36e1f9750b8b696e43d37fe95157384a205ac6924bc948c6c429] <==
	* 
	* 
	* ==> coredns [c160494d0bf8307482da0f4b11fa31b2602ac4c68688feea87f60330bab849a1] <==
	* .:53
	[INFO] plugin/reload: Running configuration SHA512 = b7aacdf6a6aa730aafe4d018cac9b7b5ecfb346cba84a99f64521f87aef8b4958639c1cf97967716465791d05bd38f372615327b7cb1d93c850bae532744d54d
	CoreDNS-1.10.1
	linux/arm64, go1.20, 055b2c3
	[INFO] 127.0.0.1:60093 - 43985 "HINFO IN 5462344948435926723.8506925124705925349. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.01730594s
	
	* 
	* ==> describe nodes <==
	* Name:               pause-142925
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=pause-142925
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=7022875d4a054c2d518e5e5a7b9d500799d50140
	                    minikube.k8s.io/name=pause-142925
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_05_31T19_25_57_0700
	                    minikube.k8s.io/version=v1.30.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 31 May 2023 19:25:52 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-142925
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 31 May 2023 19:27:23 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 31 May 2023 19:27:13 +0000   Wed, 31 May 2023 19:25:49 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 31 May 2023 19:27:13 +0000   Wed, 31 May 2023 19:25:49 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 31 May 2023 19:27:13 +0000   Wed, 31 May 2023 19:25:49 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 31 May 2023 19:27:13 +0000   Wed, 31 May 2023 19:26:39 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    pause-142925
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022624Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022624Ki
	  pods:               110
	System Info:
	  Machine ID:                 9bc224c0e0844d30b914477be348d6ab
	  System UUID:                d716a0cd-3b16-41b8-bb62-dfaa9f48779d
	  Boot ID:                    35429d0f-2ece-432d-a992-d9f8cda99d9c
	  Kernel Version:             5.15.0-1037-aws
	  OS Image:                   Ubuntu 22.04.2 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.24.5
	  Kubelet Version:            v1.27.2
	  Kube-Proxy Version:         v1.27.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-5d78c9869d-pkjvx                100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     82s
	  kube-system                 etcd-pause-142925                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         94s
	  kube-system                 kindnet-tj2db                           100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      82s
	  kube-system                 kube-apiserver-pause-142925             250m (12%)    0 (0%)      0 (0%)           0 (0%)         95s
	  kube-system                 kube-controller-manager-pause-142925    200m (10%)    0 (0%)      0 (0%)           0 (0%)         94s
	  kube-system                 kube-proxy-hrhmq                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         82s
	  kube-system                 kube-scheduler-pause-142925             100m (5%)     0 (0%)      0 (0%)           0 (0%)         97s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 80s                  kube-proxy       
	  Normal  Starting                 13s                  kube-proxy       
	  Normal  Starting                 103s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  103s (x8 over 103s)  kubelet          Node pause-142925 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    103s (x8 over 103s)  kubelet          Node pause-142925 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     103s (x8 over 103s)  kubelet          Node pause-142925 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    95s                  kubelet          Node pause-142925 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  95s                  kubelet          Node pause-142925 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     95s                  kubelet          Node pause-142925 status is now: NodeHasSufficientPID
	  Normal  Starting                 95s                  kubelet          Starting kubelet.
	  Normal  RegisteredNode           83s                  node-controller  Node pause-142925 event: Registered Node pause-142925 in Controller
	  Normal  NodeReady                51s                  kubelet          Node pause-142925 status is now: NodeReady
	  Normal  Starting                 23s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  22s (x8 over 22s)    kubelet          Node pause-142925 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    22s (x8 over 22s)    kubelet          Node pause-142925 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     22s (x8 over 22s)    kubelet          Node pause-142925 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           4s                   node-controller  Node pause-142925 event: Registered Node pause-142925 in Controller
	
	* 
	* ==> dmesg <==
	* [  +0.000741] FS-Cache: N-cookie c=0000000c [p=00000003 fl=2 nc=0 na=1]
	[  +0.000954] FS-Cache: N-cookie d=0000000077ffde6d{9p.inode} n=0000000074ec7799
	[  +0.001241] FS-Cache: N-key=[8] '915b3b0000000000'
	[  +0.003042] FS-Cache: Duplicate cookie detected
	[  +0.000777] FS-Cache: O-cookie c=00000006 [p=00000003 fl=226 nc=0 na=1]
	[  +0.000979] FS-Cache: O-cookie d=0000000077ffde6d{9p.inode} n=0000000031e1563a
	[  +0.001057] FS-Cache: O-key=[8] '915b3b0000000000'
	[  +0.000743] FS-Cache: N-cookie c=0000000d [p=00000003 fl=2 nc=0 na=1]
	[  +0.000985] FS-Cache: N-cookie d=0000000077ffde6d{9p.inode} n=000000007278ef73
	[  +0.001110] FS-Cache: N-key=[8] '915b3b0000000000'
	[  +2.905928] FS-Cache: Duplicate cookie detected
	[  +0.000862] FS-Cache: O-cookie c=00000004 [p=00000003 fl=226 nc=0 na=1]
	[  +0.001154] FS-Cache: O-cookie d=0000000077ffde6d{9p.inode} n=00000000ad00c953
	[  +0.001219] FS-Cache: O-key=[8] '905b3b0000000000'
	[  +0.000792] FS-Cache: N-cookie c=0000000f [p=00000003 fl=2 nc=0 na=1]
	[  +0.001108] FS-Cache: N-cookie d=0000000077ffde6d{9p.inode} n=00000000be9b4fe0
	[  +0.001229] FS-Cache: N-key=[8] '905b3b0000000000'
	[  +0.280333] FS-Cache: Duplicate cookie detected
	[  +0.000717] FS-Cache: O-cookie c=00000009 [p=00000003 fl=226 nc=0 na=1]
	[  +0.000979] FS-Cache: O-cookie d=0000000077ffde6d{9p.inode} n=000000003fd4f91a
	[  +0.001109] FS-Cache: O-key=[8] '985b3b0000000000'
	[  +0.000717] FS-Cache: N-cookie c=00000010 [p=00000003 fl=2 nc=0 na=1]
	[  +0.000957] FS-Cache: N-cookie d=0000000077ffde6d{9p.inode} n=0000000074ec7799
	[  +0.001067] FS-Cache: N-key=[8] '985b3b0000000000'
	[  +9.760834] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	
	* 
	* ==> etcd [1a52edf685c10d8c8996c6464f17bd55b67f89642dc7f2b0b0b72e885ce088fd] <==
	* {"level":"info","ts":"2023-05-31T19:27:09.187Z","caller":"embed/etcd.go:762","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-05-31T19:27:09.186Z","caller":"etcdserver/server.go:754","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2023-05-31T19:27:09.187Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-05-31T19:27:09.194Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-05-31T19:27:09.194Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-05-31T19:27:09.173Z","caller":"embed/etcd.go:586","msg":"serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2023-05-31T19:27:09.195Z","caller":"embed/etcd.go:558","msg":"cmux::serve","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2023-05-31T19:27:09.187Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 switched to configuration voters=(16896983918768216326)"}
	{"level":"info","ts":"2023-05-31T19:27:09.195Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","added-peer-id":"ea7e25599daad906","added-peer-peer-urls":["https://192.168.76.2:2380"]}
	{"level":"info","ts":"2023-05-31T19:27:09.195Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","cluster-version":"3.5"}
	{"level":"info","ts":"2023-05-31T19:27:09.195Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-05-31T19:27:10.292Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 is starting a new election at term 2"}
	{"level":"info","ts":"2023-05-31T19:27:10.292Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became pre-candidate at term 2"}
	{"level":"info","ts":"2023-05-31T19:27:10.292Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgPreVoteResp from ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2023-05-31T19:27:10.292Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became candidate at term 3"}
	{"level":"info","ts":"2023-05-31T19:27:10.292Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2023-05-31T19:27:10.292Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became leader at term 3"}
	{"level":"info","ts":"2023-05-31T19:27:10.292Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2023-05-31T19:27:10.296Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"ea7e25599daad906","local-member-attributes":"{Name:pause-142925 ClientURLs:[https://192.168.76.2:2379]}","request-path":"/0/members/ea7e25599daad906/attributes","cluster-id":"6f20f2c4b2fb5f8a","publish-timeout":"7s"}
	{"level":"info","ts":"2023-05-31T19:27:10.296Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-05-31T19:27:10.296Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-05-31T19:27:10.296Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-05-31T19:27:10.296Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-05-31T19:27:10.297Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-05-31T19:27:10.300Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"192.168.76.2:2379"}
	
	* 
	* ==> etcd [7ee1670fa5d41728bdabc13111b625c2ed395d205a748387291c212bf8c167df] <==
	* {"level":"warn","ts":"2023-05-31T19:26:53.110Z","caller":"flags/flag.go:93","msg":"unrecognized environment variable","environment-variable":"ETCD_UNSUPPORTED_ARCH=arm64"}
	{"level":"info","ts":"2023-05-31T19:26:53.112Z","caller":"etcdmain/etcd.go:73","msg":"Running: ","args":["etcd","--advertise-client-urls=https://192.168.76.2:2379","--cert-file=/var/lib/minikube/certs/etcd/server.crt","--client-cert-auth=true","--data-dir=/var/lib/minikube/etcd","--experimental-initial-corrupt-check=true","--experimental-watch-progress-notify-interval=5s","--initial-advertise-peer-urls=https://192.168.76.2:2380","--initial-cluster=pause-142925=https://192.168.76.2:2380","--key-file=/var/lib/minikube/certs/etcd/server.key","--listen-client-urls=https://127.0.0.1:2379,https://192.168.76.2:2379","--listen-metrics-urls=http://127.0.0.1:2381","--listen-peer-urls=https://192.168.76.2:2380","--name=pause-142925","--peer-cert-file=/var/lib/minikube/certs/etcd/peer.crt","--peer-client-cert-auth=true","--peer-key-file=/var/lib/minikube/certs/etcd/peer.key","--peer-trusted-ca-file=/var/lib/minikube/certs/etcd/ca.crt","--proxy-refresh-interval=70000","--snapshot-count=10000","--trusted-ca-file=/
var/lib/minikube/certs/etcd/ca.crt"]}
	{"level":"info","ts":"2023-05-31T19:26:53.119Z","caller":"etcdmain/etcd.go:116","msg":"server has been already initialized","data-dir":"/var/lib/minikube/etcd","dir-type":"member"}
	{"level":"info","ts":"2023-05-31T19:26:53.120Z","caller":"embed/etcd.go:124","msg":"configuring peer listeners","listen-peer-urls":["https://192.168.76.2:2380"]}
	{"level":"info","ts":"2023-05-31T19:26:53.120Z","caller":"embed/etcd.go:484","msg":"starting with peer TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/peer.crt, key = /var/lib/minikube/certs/etcd/peer.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2023-05-31T19:26:53.120Z","caller":"embed/etcd.go:132","msg":"configuring client listeners","listen-client-urls":["https://127.0.0.1:2379","https://192.168.76.2:2379"]}
	{"level":"info","ts":"2023-05-31T19:26:53.121Z","caller":"embed/etcd.go:306","msg":"starting an etcd server","etcd-version":"3.5.7","git-sha":"215b53cf3","go-version":"go1.17.13","go-os":"linux","go-arch":"arm64","max-cpu-set":2,"max-cpu-available":2,"member-initialized":true,"name":"pause-142925","data-dir":"/var/lib/minikube/etcd","wal-dir":"","wal-dir-dedicated":"","member-dir":"/var/lib/minikube/etcd/member","force-new-cluster":false,"heartbeat-interval":"100ms","election-timeout":"1s","initial-election-tick-advance":true,"snapshot-count":10000,"max-wals":5,"max-snapshots":5,"snapshot-catchup-entries":5000,"initial-advertise-peer-urls":["https://192.168.76.2:2380"],"listen-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.76.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"],"cors":["*"],"host-whitelist":["*"],"initial-cluster":"","initial-cluster-state":"new","initial-cluster-token"
:"","quota-backend-bytes":2147483648,"max-request-bytes":1572864,"max-concurrent-streams":4294967295,"pre-vote":true,"initial-corrupt-check":true,"corrupt-check-time-interval":"0s","compact-check-time-enabled":false,"compact-check-time-interval":"1m0s","auto-compaction-mode":"periodic","auto-compaction-retention":"0s","auto-compaction-interval":"0s","discovery-url":"","discovery-proxy":"","downgrade-check-interval":"5s"}
	{"level":"info","ts":"2023-05-31T19:26:53.150Z","caller":"etcdserver/backend.go:81","msg":"opened backend db","path":"/var/lib/minikube/etcd/member/snap/db","took":"5.91824ms"}
	{"level":"info","ts":"2023-05-31T19:26:53.219Z","caller":"etcdserver/server.go:530","msg":"No snapshot found. Recovering WAL from scratch!"}
	
	* 
	* ==> kernel <==
	*  19:27:31 up  1:09,  0 users,  load average: 3.20, 2.57, 1.93
	Linux pause-142925 5.15.0-1037-aws #41~20.04.1-Ubuntu SMP Mon May 22 18:20:20 UTC 2023 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.2 LTS"
	
	* 
	* ==> kindnet [9273f7b7e15106f96991156699b30fa260e61482461e1588686d29e389333d78] <==
	* I0531 19:27:14.736899       1 main.go:102] connected to apiserver: https://10.96.0.1:443
	I0531 19:27:14.736985       1 main.go:107] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I0531 19:27:14.737110       1 main.go:116] setting mtu 1500 for CNI 
	I0531 19:27:14.737120       1 main.go:146] kindnetd IP family: "ipv4"
	I0531 19:27:14.737134       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I0531 19:27:15.158632       1 main.go:223] Handling node with IPs: map[192.168.76.2:{}]
	I0531 19:27:15.158668       1 main.go:227] handling current node
	I0531 19:27:25.244972       1 main.go:223] Handling node with IPs: map[192.168.76.2:{}]
	I0531 19:27:25.245095       1 main.go:227] handling current node
	
	* 
	* ==> kindnet [ebefff5a4557aa51ef59f346ac5bd484e9300fed7b41c4dfd142d34f42af2a35] <==
	* I0531 19:26:52.987636       1 main.go:102] connected to apiserver: https://10.96.0.1:443
	I0531 19:26:52.988115       1 main.go:107] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I0531 19:26:52.988287       1 main.go:116] setting mtu 1500 for CNI 
	I0531 19:26:52.988598       1 main.go:146] kindnetd IP family: "ipv4"
	I0531 19:26:52.988696       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	
	* 
	* ==> kube-apiserver [53d9487d7e1b556075195bfb0286d52f52853cdea8e81c656c257bcf3e6f01d3] <==
	* 
	* 
	* ==> kube-apiserver [85bfc1cda7f0b6431f5784df20c83004af310b71189266c271dd5ea12229a281] <==
	* I0531 19:27:13.171738       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0531 19:27:13.171809       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0531 19:27:13.171950       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0531 19:27:13.178345       1 available_controller.go:423] Starting AvailableConditionController
	I0531 19:27:13.178437       1 cache.go:32] Waiting for caches to sync for AvailableConditionController controller
	I0531 19:27:13.178526       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0531 19:27:13.186298       1 crdregistration_controller.go:111] Starting crd-autoregister controller
	I0531 19:27:13.289261       1 shared_informer.go:311] Waiting for caches to sync for crd-autoregister
	I0531 19:27:13.600105       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I0531 19:27:13.601766       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I0531 19:27:13.613115       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0531 19:27:13.617519       1 shared_informer.go:318] Caches are synced for node_authorizer
	I0531 19:27:13.679505       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0531 19:27:13.691906       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I0531 19:27:13.692018       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0531 19:27:13.725134       1 cache.go:39] Caches are synced for autoregister controller
	I0531 19:27:13.729909       1 apf_controller.go:366] Running API Priority and Fairness config worker
	I0531 19:27:13.730017       1 apf_controller.go:369] Running API Priority and Fairness periodic rebalancing process
	I0531 19:27:13.730800       1 shared_informer.go:318] Caches are synced for configmaps
	I0531 19:27:14.226607       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0531 19:27:16.933986       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I0531 19:27:17.156439       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I0531 19:27:17.186940       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I0531 19:27:17.289609       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0531 19:27:17.303164       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	
	* 
	* ==> kube-controller-manager [ccff2afb9f0d7350e36e7ec256289cccb2c5a8cfe9135602ab761d803ef779e5] <==
	* 
	* 
	* ==> kube-controller-manager [f34040e109e2a21d20611802eca9c1ae6345b0cf236a31620b512fb6742ff23b] <==
	* I0531 19:27:26.684014       1 taint_manager.go:211] "Sending events to api server"
	I0531 19:27:26.682465       1 event.go:307] "Event occurred" object="pause-142925" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node pause-142925 event: Registered Node pause-142925 in Controller"
	I0531 19:27:26.683944       1 shared_informer.go:318] Caches are synced for PV protection
	I0531 19:27:26.688258       1 shared_informer.go:318] Caches are synced for crt configmap
	I0531 19:27:26.704724       1 shared_informer.go:318] Caches are synced for deployment
	I0531 19:27:26.707741       1 shared_informer.go:318] Caches are synced for GC
	I0531 19:27:26.716259       1 shared_informer.go:318] Caches are synced for HPA
	I0531 19:27:26.716403       1 shared_informer.go:318] Caches are synced for daemon sets
	I0531 19:27:26.730504       1 shared_informer.go:318] Caches are synced for service account
	I0531 19:27:26.732011       1 shared_informer.go:318] Caches are synced for namespace
	I0531 19:27:26.751734       1 shared_informer.go:318] Caches are synced for attach detach
	I0531 19:27:26.768293       1 shared_informer.go:318] Caches are synced for ephemeral
	I0531 19:27:26.768311       1 shared_informer.go:318] Caches are synced for endpoint
	I0531 19:27:26.772882       1 shared_informer.go:318] Caches are synced for expand
	I0531 19:27:26.789012       1 shared_informer.go:318] Caches are synced for endpoint_slice_mirroring
	I0531 19:27:26.789120       1 shared_informer.go:318] Caches are synced for endpoint_slice
	I0531 19:27:26.802051       1 shared_informer.go:318] Caches are synced for stateful set
	I0531 19:27:26.833475       1 shared_informer.go:318] Caches are synced for disruption
	I0531 19:27:26.841834       1 shared_informer.go:318] Caches are synced for PVC protection
	I0531 19:27:26.843147       1 shared_informer.go:318] Caches are synced for persistent volume
	I0531 19:27:26.882423       1 shared_informer.go:318] Caches are synced for resource quota
	I0531 19:27:26.894025       1 shared_informer.go:318] Caches are synced for resource quota
	I0531 19:27:27.200457       1 shared_informer.go:318] Caches are synced for garbage collector
	I0531 19:27:27.200492       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I0531 19:27:27.283373       1 shared_informer.go:318] Caches are synced for garbage collector
	
	* 
	* ==> kube-proxy [478eaf89680befbce2bd6c495bd58bce3c8fc95be30d948c1b7a09dbeb666faf] <==
	* 
	* 
	* ==> kube-proxy [c11065f5bc5427833241713211d2b26af3f39670c636150e5f6372db8a1aa6eb] <==
	* I0531 19:27:16.745879       1 node.go:141] Successfully retrieved node IP: 192.168.76.2
	I0531 19:27:16.746139       1 server_others.go:110] "Detected node IP" address="192.168.76.2"
	I0531 19:27:16.746198       1 server_others.go:551] "Using iptables proxy"
	I0531 19:27:16.889673       1 server_others.go:190] "Using iptables Proxier"
	I0531 19:27:16.889774       1 server_others.go:197] "kube-proxy running in dual-stack mode" ipFamily=IPv4
	I0531 19:27:16.889806       1 server_others.go:198] "Creating dualStackProxier for iptables"
	I0531 19:27:16.889850       1 server_others.go:481] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, defaulting to no-op detect-local for IPv6"
	I0531 19:27:16.889942       1 proxier.go:253] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0531 19:27:16.890540       1 server.go:657] "Version info" version="v1.27.2"
	I0531 19:27:16.896853       1 server.go:659] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0531 19:27:16.897843       1 config.go:188] "Starting service config controller"
	I0531 19:27:16.898075       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0531 19:27:16.908915       1 config.go:97] "Starting endpoint slice config controller"
	I0531 19:27:16.909006       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0531 19:27:16.909647       1 config.go:315] "Starting node config controller"
	I0531 19:27:16.909701       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0531 19:27:17.009572       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0531 19:27:17.009619       1 shared_informer.go:318] Caches are synced for service config
	I0531 19:27:17.011048       1 shared_informer.go:318] Caches are synced for node config
	
	* 
	* ==> kube-scheduler [b7eeb9e9931e50ff612422f3fc028a906b4392080447dc3fb66403d47e63ac4c] <==
	* I0531 19:27:04.752543       1 serving.go:348] Generated self-signed cert in-memory
	W0531 19:27:05.604610       1 authentication.go:368] Error looking up in-cluster authentication configuration: Get "https://192.168.76.2:8443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp 192.168.76.2:8443: connect: connection refused
	W0531 19:27:05.604640       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0531 19:27:05.604647       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0531 19:27:05.608016       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.27.2"
	I0531 19:27:05.608057       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0531 19:27:05.609542       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0531 19:27:05.609637       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0531 19:27:05.609695       1 shared_informer.go:314] unable to sync caches for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0531 19:27:05.609730       1 configmap_cafile_content.go:210] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0531 19:27:05.610299       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I0531 19:27:05.610415       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0531 19:27:05.610457       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	I0531 19:27:05.610577       1 secure_serving.go:255] Stopped listening on 127.0.0.1:10259
	E0531 19:27:05.610717       1 run.go:74] "command failed" err="finished without leader elect"
	
	* 
	* ==> kube-scheduler [c659c1625e37929308cc4ad220da1e17657ca23ffc4f9d6775ade9a8c8eb4d92] <==
	* I0531 19:27:11.813949       1 serving.go:348] Generated self-signed cert in-memory
	I0531 19:27:16.011346       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.27.2"
	I0531 19:27:16.011384       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0531 19:27:16.097240       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I0531 19:27:16.097403       1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController
	I0531 19:27:16.097425       1 shared_informer.go:311] Waiting for caches to sync for RequestHeaderAuthRequestController
	I0531 19:27:16.097459       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0531 19:27:16.097476       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0531 19:27:16.097491       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I0531 19:27:16.097497       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I0531 19:27:16.097518       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0531 19:27:16.198101       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I0531 19:27:16.198231       1 shared_informer.go:318] Caches are synced for RequestHeaderAuthRequestController
	I0531 19:27:16.198338       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* May 31 19:27:08 pause-142925 kubelet[3078]: E0531 19:27:08.867692    3078 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.76.2:8443: connect: connection refused
	May 31 19:27:09 pause-142925 kubelet[3078]: I0531 19:27:09.543901    3078 kubelet_node_status.go:70] "Attempting to register node" node="pause-142925"
	May 31 19:27:13 pause-142925 kubelet[3078]: I0531 19:27:13.636605    3078 kubelet_node_status.go:108] "Node was previously registered" node="pause-142925"
	May 31 19:27:13 pause-142925 kubelet[3078]: I0531 19:27:13.636722    3078 kubelet_node_status.go:73] "Successfully registered node" node="pause-142925"
	May 31 19:27:13 pause-142925 kubelet[3078]: I0531 19:27:13.639279    3078 kuberuntime_manager.go:1460] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	May 31 19:27:13 pause-142925 kubelet[3078]: I0531 19:27:13.640018    3078 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	May 31 19:27:14 pause-142925 kubelet[3078]: I0531 19:27:14.023872    3078 apiserver.go:52] "Watching apiserver"
	May 31 19:27:14 pause-142925 kubelet[3078]: I0531 19:27:14.028232    3078 topology_manager.go:212] "Topology Admit Handler"
	May 31 19:27:14 pause-142925 kubelet[3078]: I0531 19:27:14.028343    3078 topology_manager.go:212] "Topology Admit Handler"
	May 31 19:27:14 pause-142925 kubelet[3078]: I0531 19:27:14.028413    3078 topology_manager.go:212] "Topology Admit Handler"
	May 31 19:27:14 pause-142925 kubelet[3078]: I0531 19:27:14.031105    3078 desired_state_of_world_populator.go:153] "Finished populating initial desired state of world"
	May 31 19:27:14 pause-142925 kubelet[3078]: I0531 19:27:14.068009    3078 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/41ccce12-4ef2-49e2-9bbd-a664a715e971-lib-modules\") pod \"kube-proxy-hrhmq\" (UID: \"41ccce12-4ef2-49e2-9bbd-a664a715e971\") " pod="kube-system/kube-proxy-hrhmq"
	May 31 19:27:14 pause-142925 kubelet[3078]: I0531 19:27:14.068060    3078 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/684be7a1-9260-4d7d-afe4-22eba3383872-config-volume\") pod \"coredns-5d78c9869d-pkjvx\" (UID: \"684be7a1-9260-4d7d-afe4-22eba3383872\") " pod="kube-system/coredns-5d78c9869d-pkjvx"
	May 31 19:27:14 pause-142925 kubelet[3078]: I0531 19:27:14.068093    3078 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/78b57b57-65bd-42d1-9c09-929951cdcb97-xtables-lock\") pod \"kindnet-tj2db\" (UID: \"78b57b57-65bd-42d1-9c09-929951cdcb97\") " pod="kube-system/kindnet-tj2db"
	May 31 19:27:14 pause-142925 kubelet[3078]: I0531 19:27:14.068118    3078 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q78pt\" (UniqueName: \"kubernetes.io/projected/41ccce12-4ef2-49e2-9bbd-a664a715e971-kube-api-access-q78pt\") pod \"kube-proxy-hrhmq\" (UID: \"41ccce12-4ef2-49e2-9bbd-a664a715e971\") " pod="kube-system/kube-proxy-hrhmq"
	May 31 19:27:14 pause-142925 kubelet[3078]: I0531 19:27:14.068143    3078 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wrtlz\" (UniqueName: \"kubernetes.io/projected/684be7a1-9260-4d7d-afe4-22eba3383872-kube-api-access-wrtlz\") pod \"coredns-5d78c9869d-pkjvx\" (UID: \"684be7a1-9260-4d7d-afe4-22eba3383872\") " pod="kube-system/coredns-5d78c9869d-pkjvx"
	May 31 19:27:14 pause-142925 kubelet[3078]: I0531 19:27:14.068166    3078 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/41ccce12-4ef2-49e2-9bbd-a664a715e971-xtables-lock\") pod \"kube-proxy-hrhmq\" (UID: \"41ccce12-4ef2-49e2-9bbd-a664a715e971\") " pod="kube-system/kube-proxy-hrhmq"
	May 31 19:27:14 pause-142925 kubelet[3078]: I0531 19:27:14.068188    3078 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/78b57b57-65bd-42d1-9c09-929951cdcb97-cni-cfg\") pod \"kindnet-tj2db\" (UID: \"78b57b57-65bd-42d1-9c09-929951cdcb97\") " pod="kube-system/kindnet-tj2db"
	May 31 19:27:14 pause-142925 kubelet[3078]: I0531 19:27:14.068210    3078 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/78b57b57-65bd-42d1-9c09-929951cdcb97-lib-modules\") pod \"kindnet-tj2db\" (UID: \"78b57b57-65bd-42d1-9c09-929951cdcb97\") " pod="kube-system/kindnet-tj2db"
	May 31 19:27:14 pause-142925 kubelet[3078]: I0531 19:27:14.068242    3078 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qgfnl\" (UniqueName: \"kubernetes.io/projected/78b57b57-65bd-42d1-9c09-929951cdcb97-kube-api-access-qgfnl\") pod \"kindnet-tj2db\" (UID: \"78b57b57-65bd-42d1-9c09-929951cdcb97\") " pod="kube-system/kindnet-tj2db"
	May 31 19:27:14 pause-142925 kubelet[3078]: I0531 19:27:14.068266    3078 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/41ccce12-4ef2-49e2-9bbd-a664a715e971-kube-proxy\") pod \"kube-proxy-hrhmq\" (UID: \"41ccce12-4ef2-49e2-9bbd-a664a715e971\") " pod="kube-system/kube-proxy-hrhmq"
	May 31 19:27:14 pause-142925 kubelet[3078]: I0531 19:27:14.068281    3078 reconciler.go:41] "Reconciler: start to sync state"
	May 31 19:27:14 pause-142925 kubelet[3078]: I0531 19:27:14.328953    3078 scope.go:115] "RemoveContainer" containerID="ebefff5a4557aa51ef59f346ac5bd484e9300fed7b41c4dfd142d34f42af2a35"
	May 31 19:27:14 pause-142925 kubelet[3078]: I0531 19:27:14.331177    3078 scope.go:115] "RemoveContainer" containerID="aecfb452ce5e36e1f9750b8b696e43d37fe95157384a205ac6924bc948c6c429"
	May 31 19:27:14 pause-142925 kubelet[3078]: I0531 19:27:14.331499    3078 scope.go:115] "RemoveContainer" containerID="478eaf89680befbce2bd6c495bd58bce3c8fc95be30d948c1b7a09dbeb666faf"
	

                                                
                                                
-- /stdout --
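For reference, the post-mortem state above can be re-collected by hand against the same profile; a minimal sketch using the commands the harness itself runs (profile name and binary path taken from this report):

	# control-plane health as reported by minikube
	out/minikube-linux-arm64 status --format={{.APIServer}} -p pause-142925 -n pause-142925
	# list any pods that are not Running (empty output on a healthy cluster)
	kubectl --context pause-142925 get po -A --field-selector=status.phase!=Running
	# last 25 lines of the aggregated node logs shown above
	out/minikube-linux-arm64 -p pause-142925 logs -n 25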
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p pause-142925 -n pause-142925
helpers_test.go:261: (dbg) Run:  kubectl --context pause-142925 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect pause-142925
helpers_test.go:235: (dbg) docker inspect pause-142925:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "621845fcc6fd2e5cff9530c4e999c0faacc75443d3b2bab32de290cb4e7e8f79",
	        "Created": "2023-05-31T19:25:33.629379075Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 131258,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-05-31T19:25:34.004434783Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:dee4774de4f99268e16c76379a36a4607bed47635a069a7e60c17cd24d9aaa76",
	        "ResolvConfPath": "/var/lib/docker/containers/621845fcc6fd2e5cff9530c4e999c0faacc75443d3b2bab32de290cb4e7e8f79/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/621845fcc6fd2e5cff9530c4e999c0faacc75443d3b2bab32de290cb4e7e8f79/hostname",
	        "HostsPath": "/var/lib/docker/containers/621845fcc6fd2e5cff9530c4e999c0faacc75443d3b2bab32de290cb4e7e8f79/hosts",
	        "LogPath": "/var/lib/docker/containers/621845fcc6fd2e5cff9530c4e999c0faacc75443d3b2bab32de290cb4e7e8f79/621845fcc6fd2e5cff9530c4e999c0faacc75443d3b2bab32de290cb4e7e8f79-json.log",
	        "Name": "/pause-142925",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "pause-142925:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "pause-142925",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2147483648,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4294967296,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/a40260018971da61143bd652a5a26eab3ad58faf3058c1b2351592bd939bf784-init/diff:/var/lib/docker/overlay2/548bced7e749d102323bab71db162b075785f916e2a896d29f3adc2c3d7fbea8/diff",
	                "MergedDir": "/var/lib/docker/overlay2/a40260018971da61143bd652a5a26eab3ad58faf3058c1b2351592bd939bf784/merged",
	                "UpperDir": "/var/lib/docker/overlay2/a40260018971da61143bd652a5a26eab3ad58faf3058c1b2351592bd939bf784/diff",
	                "WorkDir": "/var/lib/docker/overlay2/a40260018971da61143bd652a5a26eab3ad58faf3058c1b2351592bd939bf784/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "pause-142925",
	                "Source": "/var/lib/docker/volumes/pause-142925/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "pause-142925",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685034446-16582@sha256:aa728b22374c829d1e5b0a5d64d51d3e0ae0f2b191381d957516fdff68f357c8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "pause-142925",
	                "name.minikube.sigs.k8s.io": "pause-142925",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "e504dbe60eba776011d04c15569218a453d7b933c958c8b12d242c6d073d5d26",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32974"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32973"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32970"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32972"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32971"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/e504dbe60eba",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "pause-142925": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "621845fcc6fd",
	                        "pause-142925"
	                    ],
	                    "NetworkID": "0d663842a08da493e682e79b1106b90ddf6db82405fe988e860d90e528e3e11f",
	                    "EndpointID": "f5be19752cd229708b2c335f8917935a22e19a4b172c43d62fe9db4304c36031",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:4c:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
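When only a few fields of the inspect output matter, Docker's --format template can query them directly; a small sketch against this container (field paths taken from the JSON above):

	# container state; expect "running"
	docker inspect pause-142925 --format '{{.State.Status}}'
	# node IP on the profile network, matching the 192.168.76.2 endpoints in the logs
	docker inspect pause-142925 --format '{{(index .NetworkSettings.Networks "pause-142925").IPAddress}}'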
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p pause-142925 -n pause-142925
helpers_test.go:244: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p pause-142925 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p pause-142925 logs -n 25: (2.825509616s)
helpers_test.go:252: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |           Profile           |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	| delete  | -p insufficient-storage-158364 | insufficient-storage-158364 | jenkins | v1.30.1 | 31 May 23 19:19 UTC | 31 May 23 19:19 UTC |
	| start   | -p NoKubernetes-969645         | NoKubernetes-969645         | jenkins | v1.30.1 | 31 May 23 19:19 UTC |                     |
	|         | --no-kubernetes                |                             |         |         |                     |                     |
	|         | --kubernetes-version=1.20      |                             |         |         |                     |                     |
	|         | --driver=docker                |                             |         |         |                     |                     |
	|         | --container-runtime=crio       |                             |         |         |                     |                     |
	| start   | -p NoKubernetes-969645         | NoKubernetes-969645         | jenkins | v1.30.1 | 31 May 23 19:19 UTC | 31 May 23 19:20 UTC |
	|         | --driver=docker                |                             |         |         |                     |                     |
	|         | --container-runtime=crio       |                             |         |         |                     |                     |
	| start   | -p NoKubernetes-969645         | NoKubernetes-969645         | jenkins | v1.30.1 | 31 May 23 19:20 UTC | 31 May 23 19:20 UTC |
	|         | --no-kubernetes                |                             |         |         |                     |                     |
	|         | --driver=docker                |                             |         |         |                     |                     |
	|         | --container-runtime=crio       |                             |         |         |                     |                     |
	| delete  | -p NoKubernetes-969645         | NoKubernetes-969645         | jenkins | v1.30.1 | 31 May 23 19:20 UTC | 31 May 23 19:20 UTC |
	| start   | -p NoKubernetes-969645         | NoKubernetes-969645         | jenkins | v1.30.1 | 31 May 23 19:20 UTC | 31 May 23 19:20 UTC |
	|         | --no-kubernetes                |                             |         |         |                     |                     |
	|         | --driver=docker                |                             |         |         |                     |                     |
	|         | --container-runtime=crio       |                             |         |         |                     |                     |
	| ssh     | -p NoKubernetes-969645 sudo    | NoKubernetes-969645         | jenkins | v1.30.1 | 31 May 23 19:20 UTC |                     |
	|         | systemctl is-active --quiet    |                             |         |         |                     |                     |
	|         | service kubelet                |                             |         |         |                     |                     |
	| stop    | -p NoKubernetes-969645         | NoKubernetes-969645         | jenkins | v1.30.1 | 31 May 23 19:20 UTC | 31 May 23 19:20 UTC |
	| start   | -p NoKubernetes-969645         | NoKubernetes-969645         | jenkins | v1.30.1 | 31 May 23 19:20 UTC | 31 May 23 19:20 UTC |
	|         | --driver=docker                |                             |         |         |                     |                     |
	|         | --container-runtime=crio       |                             |         |         |                     |                     |
	| ssh     | -p NoKubernetes-969645 sudo    | NoKubernetes-969645         | jenkins | v1.30.1 | 31 May 23 19:20 UTC |                     |
	|         | systemctl is-active --quiet    |                             |         |         |                     |                     |
	|         | service kubelet                |                             |         |         |                     |                     |
	| delete  | -p NoKubernetes-969645         | NoKubernetes-969645         | jenkins | v1.30.1 | 31 May 23 19:20 UTC | 31 May 23 19:21 UTC |
	| start   | -p kubernetes-upgrade-843072   | kubernetes-upgrade-843072   | jenkins | v1.30.1 | 31 May 23 19:21 UTC | 31 May 23 19:22 UTC |
	|         | --memory=2200                  |                             |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0   |                             |         |         |                     |                     |
	|         | --alsologtostderr              |                             |         |         |                     |                     |
	|         | -v=1 --driver=docker           |                             |         |         |                     |                     |
	|         | --container-runtime=crio       |                             |         |         |                     |                     |
	| delete  | -p missing-upgrade-915836      | missing-upgrade-915836      | jenkins | v1.30.1 | 31 May 23 19:21 UTC | 31 May 23 19:21 UTC |
	| stop    | -p kubernetes-upgrade-843072   | kubernetes-upgrade-843072   | jenkins | v1.30.1 | 31 May 23 19:22 UTC | 31 May 23 19:22 UTC |
	| start   | -p kubernetes-upgrade-843072   | kubernetes-upgrade-843072   | jenkins | v1.30.1 | 31 May 23 19:22 UTC | 31 May 23 19:26 UTC |
	|         | --memory=2200                  |                             |         |         |                     |                     |
	|         | --kubernetes-version=v1.27.2   |                             |         |         |                     |                     |
	|         | --alsologtostderr              |                             |         |         |                     |                     |
	|         | -v=1 --driver=docker           |                             |         |         |                     |                     |
	|         | --container-runtime=crio       |                             |         |         |                     |                     |
	| start   | -p stopped-upgrade-577066      | stopped-upgrade-577066      | jenkins | v1.30.1 | 31 May 23 19:23 UTC |                     |
	|         | --memory=2200                  |                             |         |         |                     |                     |
	|         | --alsologtostderr              |                             |         |         |                     |                     |
	|         | -v=1 --driver=docker           |                             |         |         |                     |                     |
	|         | --container-runtime=crio       |                             |         |         |                     |                     |
	| delete  | -p stopped-upgrade-577066      | stopped-upgrade-577066      | jenkins | v1.30.1 | 31 May 23 19:24 UTC | 31 May 23 19:24 UTC |
	| start   | -p running-upgrade-862679      | running-upgrade-862679      | jenkins | v1.30.1 | 31 May 23 19:25 UTC |                     |
	|         | --memory=2200                  |                             |         |         |                     |                     |
	|         | --alsologtostderr              |                             |         |         |                     |                     |
	|         | -v=1 --driver=docker           |                             |         |         |                     |                     |
	|         | --container-runtime=crio       |                             |         |         |                     |                     |
	| delete  | -p running-upgrade-862679      | running-upgrade-862679      | jenkins | v1.30.1 | 31 May 23 19:25 UTC | 31 May 23 19:25 UTC |
	| start   | -p pause-142925 --memory=2048  | pause-142925                | jenkins | v1.30.1 | 31 May 23 19:25 UTC | 31 May 23 19:26 UTC |
	|         | --install-addons=false         |                             |         |         |                     |                     |
	|         | --wait=all --driver=docker     |                             |         |         |                     |                     |
	|         | --container-runtime=crio       |                             |         |         |                     |                     |
	| start   | -p pause-142925                | pause-142925                | jenkins | v1.30.1 | 31 May 23 19:26 UTC | 31 May 23 19:27 UTC |
	|         | --alsologtostderr              |                             |         |         |                     |                     |
	|         | -v=1 --driver=docker           |                             |         |         |                     |                     |
	|         | --container-runtime=crio       |                             |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-843072   | kubernetes-upgrade-843072   | jenkins | v1.30.1 | 31 May 23 19:26 UTC |                     |
	|         | --memory=2200                  |                             |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0   |                             |         |         |                     |                     |
	|         | --driver=docker                |                             |         |         |                     |                     |
	|         | --container-runtime=crio       |                             |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-843072   | kubernetes-upgrade-843072   | jenkins | v1.30.1 | 31 May 23 19:26 UTC | 31 May 23 19:27 UTC |
	|         | --memory=2200                  |                             |         |         |                     |                     |
	|         | --kubernetes-version=v1.27.2   |                             |         |         |                     |                     |
	|         | --alsologtostderr              |                             |         |         |                     |                     |
	|         | -v=1 --driver=docker           |                             |         |         |                     |                     |
	|         | --container-runtime=crio       |                             |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-843072   | kubernetes-upgrade-843072   | jenkins | v1.30.1 | 31 May 23 19:27 UTC | 31 May 23 19:27 UTC |
	| start   | -p force-systemd-flag-124615   | force-systemd-flag-124615   | jenkins | v1.30.1 | 31 May 23 19:27 UTC |                     |
	|         | --memory=2048 --force-systemd  |                             |         |         |                     |                     |
	|         | --alsologtostderr              |                             |         |         |                     |                     |
	|         | -v=5 --driver=docker           |                             |         |         |                     |                     |
	|         | --container-runtime=crio       |                             |         |         |                     |                     |
	|---------|--------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
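	Each Audit row above flattens one CLI invocation across several table lines; for example, the force-systemd-flag-124615 start at 19:27 UTC reassembles to the single command below (reconstructed from the row for readability, not re-run):
	
	    # Reassembled from the last Audit row above.
	    out/minikube-linux-arm64 start -p force-systemd-flag-124615 \
	      --memory=2048 --force-systemd --alsologtostderr \
	      -v=5 --driver=docker --container-runtime=crio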
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/05/31 19:27:27
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.20.4 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0531 19:27:27.796806  139599 out.go:296] Setting OutFile to fd 1 ...
	I0531 19:27:27.796965  139599 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0531 19:27:27.796975  139599 out.go:309] Setting ErrFile to fd 2...
	I0531 19:27:27.796981  139599 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0531 19:27:27.797137  139599 root.go:336] Updating PATH: /home/jenkins/minikube-integration/16569-2389/.minikube/bin
	I0531 19:27:27.797532  139599 out.go:303] Setting JSON to false
	I0531 19:27:27.798671  139599 start.go:127] hostinfo: {"hostname":"ip-172-31-21-244","uptime":4193,"bootTime":1685557055,"procs":374,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1037-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I0531 19:27:27.798780  139599 start.go:137] virtualization:  
	I0531 19:27:27.801240  139599 out.go:177] * [force-systemd-flag-124615] minikube v1.30.1 on Ubuntu 20.04 (arm64)
	I0531 19:27:27.803460  139599 out.go:177]   - MINIKUBE_LOCATION=16569
	I0531 19:27:27.805234  139599 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0531 19:27:27.803664  139599 notify.go:220] Checking for updates...
	I0531 19:27:27.808952  139599 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/16569-2389/kubeconfig
	I0531 19:27:27.810794  139599 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/16569-2389/.minikube
	I0531 19:27:27.812629  139599 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0531 19:27:27.814340  139599 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0531 19:27:25.958063  135811 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0531 19:27:26.124520  135811 start.go:889] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0531 19:27:26.124565  135811 node_ready.go:35] waiting up to 6m0s for node "pause-142925" to be "Ready" ...
	I0531 19:27:26.128586  135811 node_ready.go:49] node "pause-142925" has status "Ready":"True"
	I0531 19:27:26.128605  135811 node_ready.go:38] duration metric: took 4.026908ms waiting for node "pause-142925" to be "Ready" ...
	I0531 19:27:26.128613  135811 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0531 19:27:26.135699  135811 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5d78c9869d-pkjvx" in "kube-system" namespace to be "Ready" ...
	I0531 19:27:26.142552  135811 pod_ready.go:92] pod "coredns-5d78c9869d-pkjvx" in "kube-system" namespace has status "Ready":"True"
	I0531 19:27:26.142578  135811 pod_ready.go:81] duration metric: took 6.845799ms waiting for pod "coredns-5d78c9869d-pkjvx" in "kube-system" namespace to be "Ready" ...
	I0531 19:27:26.142591  135811 pod_ready.go:78] waiting up to 6m0s for pod "etcd-pause-142925" in "kube-system" namespace to be "Ready" ...
	I0531 19:27:26.267804  135811 pod_ready.go:92] pod "etcd-pause-142925" in "kube-system" namespace has status "Ready":"True"
	I0531 19:27:26.267831  135811 pod_ready.go:81] duration metric: took 125.231544ms waiting for pod "etcd-pause-142925" in "kube-system" namespace to be "Ready" ...
	I0531 19:27:26.267846  135811 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-pause-142925" in "kube-system" namespace to be "Ready" ...
	I0531 19:27:26.700233  135811 pod_ready.go:92] pod "kube-apiserver-pause-142925" in "kube-system" namespace has status "Ready":"True"
	I0531 19:27:26.700300  135811 pod_ready.go:81] duration metric: took 432.445838ms waiting for pod "kube-apiserver-pause-142925" in "kube-system" namespace to be "Ready" ...
	I0531 19:27:26.700327  135811 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-pause-142925" in "kube-system" namespace to be "Ready" ...
	I0531 19:27:27.068931  135811 pod_ready.go:92] pod "kube-controller-manager-pause-142925" in "kube-system" namespace has status "Ready":"True"
	I0531 19:27:27.068956  135811 pod_ready.go:81] duration metric: took 368.608831ms waiting for pod "kube-controller-manager-pause-142925" in "kube-system" namespace to be "Ready" ...
	I0531 19:27:27.068970  135811 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-hrhmq" in "kube-system" namespace to be "Ready" ...
	I0531 19:27:27.467287  135811 pod_ready.go:92] pod "kube-proxy-hrhmq" in "kube-system" namespace has status "Ready":"True"
	I0531 19:27:27.467309  135811 pod_ready.go:81] duration metric: took 398.331892ms waiting for pod "kube-proxy-hrhmq" in "kube-system" namespace to be "Ready" ...
	I0531 19:27:27.467323  135811 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-pause-142925" in "kube-system" namespace to be "Ready" ...
	I0531 19:27:27.817055  139599 config.go:182] Loaded profile config "pause-142925": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.27.2
	I0531 19:27:27.817255  139599 driver.go:375] Setting default libvirt URI to qemu:///system
	I0531 19:27:27.848938  139599 docker.go:121] docker version: linux-24.0.2:Docker Engine - Community
	I0531 19:27:27.849042  139599 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0531 19:27:27.951657  139599 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:6 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:32 OomKillDisable:true NGoroutines:45 SystemTime:2023-05-31 19:27:27.939156728 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1037-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215166976 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:24.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dce8eb055cbb6872793272b4f20ed16117344f8 Expected:3dce8eb055cbb6872793272b4f20ed16117344f8} RuncCommit:{ID:v1.1.7-0-g860f061 Expected:v1.1.7-0-g860f061} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.5] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.18.1]] Warnings:<nil>}}
	I0531 19:27:27.951767  139599 docker.go:294] overlay module found
	I0531 19:27:27.953551  139599 out.go:177] * Using the docker driver based on user configuration
	I0531 19:27:27.955349  139599 start.go:297] selected driver: docker
	I0531 19:27:27.955368  139599 start.go:875] validating driver "docker" against <nil>
	I0531 19:27:27.955380  139599 start.go:886] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0531 19:27:27.956032  139599 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0531 19:27:28.023088  139599 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:6 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:32 OomKillDisable:true NGoroutines:45 SystemTime:2023-05-31 19:27:28.012120894 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1037-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215166976 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:24.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dce8eb055cbb6872793272b4f20ed16117344f8 Expected:3dce8eb055cbb6872793272b4f20ed16117344f8} RuncCommit:{ID:v1.1.7-0-g860f061 Expected:v1.1.7-0-g860f061} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.5] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.18.1]] Warnings:<nil>}}
	I0531 19:27:28.023251  139599 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0531 19:27:28.023458  139599 start_flags.go:901] Wait components to verify : map[apiserver:true system_pods:true]
	I0531 19:27:28.025730  139599 out.go:177] * Using Docker driver with root privileges
	I0531 19:27:28.027633  139599 cni.go:84] Creating CNI manager for ""
	I0531 19:27:28.027657  139599 cni.go:142] "docker" driver + "crio" runtime found, recommending kindnet
	I0531 19:27:28.027667  139599 start_flags.go:314] Found "CNI" CNI - setting NetworkPlugin=cni
	I0531 19:27:28.027684  139599 start_flags.go:319] config:
	{Name:force-systemd-flag-124615 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685034446-16582@sha256:aa728b22374c829d1e5b0a5d64d51d3e0ae0f2b191381d957516fdff68f357c8 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:force-systemd-flag-124615 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0531 19:27:28.029691  139599 out.go:177] * Starting control plane node force-systemd-flag-124615 in cluster force-systemd-flag-124615
	I0531 19:27:28.031166  139599 cache.go:122] Beginning downloading kic base image for docker with crio
	I0531 19:27:28.032937  139599 out.go:177] * Pulling base image ...
	I0531 19:27:28.034646  139599 preload.go:132] Checking if preload exists for k8s version v1.27.2 and runtime crio
	I0531 19:27:28.034698  139599 preload.go:148] Found local preload: /home/jenkins/minikube-integration/16569-2389/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.2-cri-o-overlay-arm64.tar.lz4
	I0531 19:27:28.034716  139599 cache.go:57] Caching tarball of preloaded images
	I0531 19:27:28.034798  139599 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685034446-16582@sha256:aa728b22374c829d1e5b0a5d64d51d3e0ae0f2b191381d957516fdff68f357c8 in local docker daemon
	I0531 19:27:28.034857  139599 preload.go:174] Found /home/jenkins/minikube-integration/16569-2389/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.2-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I0531 19:27:28.034866  139599 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.2 on crio
	I0531 19:27:28.034981  139599 profile.go:148] Saving config to /home/jenkins/minikube-integration/16569-2389/.minikube/profiles/force-systemd-flag-124615/config.json ...
	I0531 19:27:28.034998  139599 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16569-2389/.minikube/profiles/force-systemd-flag-124615/config.json: {Name:mkb5a567050c4d73e10be27c3cca047191345962 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0531 19:27:28.053029  139599 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685034446-16582@sha256:aa728b22374c829d1e5b0a5d64d51d3e0ae0f2b191381d957516fdff68f357c8 in local docker daemon, skipping pull
	I0531 19:27:28.053053  139599 cache.go:145] gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685034446-16582@sha256:aa728b22374c829d1e5b0a5d64d51d3e0ae0f2b191381d957516fdff68f357c8 exists in daemon, skipping load
	I0531 19:27:28.053075  139599 cache.go:195] Successfully downloaded all kic artifacts
	I0531 19:27:28.053121  139599 start.go:364] acquiring machines lock for force-systemd-flag-124615: {Name:mk87f044452f522f924326d91066ec90e6b98fcf Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0531 19:27:28.053242  139599 start.go:368] acquired machines lock for "force-systemd-flag-124615" in 104.548µs
	I0531 19:27:28.053268  139599 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-124615 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685034446-16582@sha256:aa728b22374c829d1e5b0a5d64d51d3e0ae0f2b191381d957516fdff68f357c8 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:force-systemd-flag-124615 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:} &{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0531 19:27:28.053348  139599 start.go:125] createHost starting for "" (driver="docker")
	I0531 19:27:27.867249  135811 pod_ready.go:92] pod "kube-scheduler-pause-142925" in "kube-system" namespace has status "Ready":"True"
	I0531 19:27:27.867268  135811 pod_ready.go:81] duration metric: took 399.938245ms waiting for pod "kube-scheduler-pause-142925" in "kube-system" namespace to be "Ready" ...
	I0531 19:27:27.867277  135811 pod_ready.go:38] duration metric: took 1.738654563s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0531 19:27:27.867292  135811 api_server.go:52] waiting for apiserver process to appear ...
	I0531 19:27:27.867347  135811 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 19:27:27.882436  135811 api_server.go:72] duration metric: took 1.929266731s to wait for apiserver process to appear ...
	I0531 19:27:27.882456  135811 api_server.go:88] waiting for apiserver healthz status ...
	I0531 19:27:27.882475  135811 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0531 19:27:27.892032  135811 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I0531 19:27:27.893580  135811 api_server.go:141] control plane version: v1.27.2
	I0531 19:27:27.893599  135811 api_server.go:131] duration metric: took 11.136038ms to wait for apiserver health ...
	I0531 19:27:27.893608  135811 system_pods.go:43] waiting for kube-system pods to appear ...
	I0531 19:27:28.071588  135811 system_pods.go:59] 7 kube-system pods found
	I0531 19:27:28.071665  135811 system_pods.go:61] "coredns-5d78c9869d-pkjvx" [684be7a1-9260-4d7d-afe4-22eba3383872] Running
	I0531 19:27:28.071686  135811 system_pods.go:61] "etcd-pause-142925" [62113c03-4eb3-4204-b5a1-3a827d4aebc3] Running
	I0531 19:27:28.071707  135811 system_pods.go:61] "kindnet-tj2db" [78b57b57-65bd-42d1-9c09-929951cdcb97] Running
	I0531 19:27:28.071748  135811 system_pods.go:61] "kube-apiserver-pause-142925" [1fe7f5b8-f674-4b80-84b9-05168e911372] Running
	I0531 19:27:28.071773  135811 system_pods.go:61] "kube-controller-manager-pause-142925" [418d12fe-7b0c-4df2-9a9f-0ee8d39ad60f] Running
	I0531 19:27:28.071797  135811 system_pods.go:61] "kube-proxy-hrhmq" [41ccce12-4ef2-49e2-9bbd-a664a715e971] Running
	I0531 19:27:28.071839  135811 system_pods.go:61] "kube-scheduler-pause-142925" [58ccb0c6-891d-4066-92b2-618238e7d456] Running
	I0531 19:27:28.071863  135811 system_pods.go:74] duration metric: took 178.249624ms to wait for pod list to return data ...
	I0531 19:27:28.071887  135811 default_sa.go:34] waiting for default service account to be created ...
	I0531 19:27:28.268788  135811 default_sa.go:45] found service account: "default"
	I0531 19:27:28.268812  135811 default_sa.go:55] duration metric: took 196.892481ms for default service account to be created ...
	I0531 19:27:28.268822  135811 system_pods.go:116] waiting for k8s-apps to be running ...
	I0531 19:27:28.473491  135811 system_pods.go:86] 7 kube-system pods found
	I0531 19:27:28.473525  135811 system_pods.go:89] "coredns-5d78c9869d-pkjvx" [684be7a1-9260-4d7d-afe4-22eba3383872] Running
	I0531 19:27:28.473533  135811 system_pods.go:89] "etcd-pause-142925" [62113c03-4eb3-4204-b5a1-3a827d4aebc3] Running
	I0531 19:27:28.473539  135811 system_pods.go:89] "kindnet-tj2db" [78b57b57-65bd-42d1-9c09-929951cdcb97] Running
	I0531 19:27:28.473544  135811 system_pods.go:89] "kube-apiserver-pause-142925" [1fe7f5b8-f674-4b80-84b9-05168e911372] Running
	I0531 19:27:28.473550  135811 system_pods.go:89] "kube-controller-manager-pause-142925" [418d12fe-7b0c-4df2-9a9f-0ee8d39ad60f] Running
	I0531 19:27:28.473773  135811 system_pods.go:89] "kube-proxy-hrhmq" [41ccce12-4ef2-49e2-9bbd-a664a715e971] Running
	I0531 19:27:28.473787  135811 system_pods.go:89] "kube-scheduler-pause-142925" [58ccb0c6-891d-4066-92b2-618238e7d456] Running
	I0531 19:27:28.473794  135811 system_pods.go:126] duration metric: took 204.96804ms to wait for k8s-apps to be running ...
	I0531 19:27:28.473802  135811 system_svc.go:44] waiting for kubelet service to be running ....
	I0531 19:27:28.473872  135811 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0531 19:27:28.512619  135811 system_svc.go:56] duration metric: took 38.806203ms WaitForService to wait for kubelet.
	I0531 19:27:28.512654  135811 kubeadm.go:581] duration metric: took 2.559490256s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0531 19:27:28.512673  135811 node_conditions.go:102] verifying NodePressure condition ...
	I0531 19:27:28.667947  135811 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0531 19:27:28.667972  135811 node_conditions.go:123] node cpu capacity is 2
	I0531 19:27:28.667983  135811 node_conditions.go:105] duration metric: took 155.304761ms to run NodePressure ...
	I0531 19:27:28.667993  135811 start.go:228] waiting for startup goroutines ...
	I0531 19:27:28.668000  135811 start.go:233] waiting for cluster config update ...
	I0531 19:27:28.668008  135811 start.go:242] writing updated cluster config ...
	I0531 19:27:28.668314  135811 ssh_runner.go:195] Run: rm -f paused
	I0531 19:27:28.764990  135811 start.go:573] kubectl: 1.27.2, cluster: 1.27.2 (minor skew: 0)
	I0531 19:27:28.768941  135811 out.go:177] * Done! kubectl is now configured to use "pause-142925" cluster and "default" namespace by default
	I0531 19:27:28.055443  139599 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I0531 19:27:28.055693  139599 start.go:159] libmachine.API.Create for "force-systemd-flag-124615" (driver="docker")
	I0531 19:27:28.055724  139599 client.go:168] LocalClient.Create starting
	I0531 19:27:28.055813  139599 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/16569-2389/.minikube/certs/ca.pem
	I0531 19:27:28.055857  139599 main.go:141] libmachine: Decoding PEM data...
	I0531 19:27:28.055881  139599 main.go:141] libmachine: Parsing certificate...
	I0531 19:27:28.055943  139599 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/16569-2389/.minikube/certs/cert.pem
	I0531 19:27:28.055966  139599 main.go:141] libmachine: Decoding PEM data...
	I0531 19:27:28.055979  139599 main.go:141] libmachine: Parsing certificate...
	I0531 19:27:28.056355  139599 cli_runner.go:164] Run: docker network inspect force-systemd-flag-124615 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0531 19:27:28.078394  139599 cli_runner.go:211] docker network inspect force-systemd-flag-124615 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0531 19:27:28.078485  139599 network_create.go:281] running [docker network inspect force-systemd-flag-124615] to gather additional debugging logs...
	I0531 19:27:28.078507  139599 cli_runner.go:164] Run: docker network inspect force-systemd-flag-124615
	W0531 19:27:28.096206  139599 cli_runner.go:211] docker network inspect force-systemd-flag-124615 returned with exit code 1
	I0531 19:27:28.096232  139599 network_create.go:284] error running [docker network inspect force-systemd-flag-124615]: docker network inspect force-systemd-flag-124615: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network force-systemd-flag-124615 not found
	I0531 19:27:28.096252  139599 network_create.go:286] output of [docker network inspect force-systemd-flag-124615]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network force-systemd-flag-124615 not found
	
	** /stderr **
	I0531 19:27:28.096314  139599 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0531 19:27:28.115260  139599 network.go:214] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-84359259bfe9 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:f9:de:3f:c7} reservation:<nil>}
	I0531 19:27:28.115705  139599 network.go:214] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-36efdaa82add IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:02:42:49:a1:12:0f} reservation:<nil>}
	I0531 19:27:28.116289  139599 network.go:209] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40011ea2b0}
	I0531 19:27:28.116312  139599 network_create.go:123] attempt to create docker network force-systemd-flag-124615 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 1500 ...
	I0531 19:27:28.116372  139599 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=force-systemd-flag-124615 force-systemd-flag-124615
	I0531 19:27:28.194295  139599 network_create.go:107] docker network force-systemd-flag-124615 192.168.67.0/24 created
	I0531 19:27:28.194325  139599 kic.go:117] calculated static IP "192.168.67.2" for the "force-systemd-flag-124615" container
	I0531 19:27:28.194630  139599 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0531 19:27:28.211981  139599 cli_runner.go:164] Run: docker volume create force-systemd-flag-124615 --label name.minikube.sigs.k8s.io=force-systemd-flag-124615 --label created_by.minikube.sigs.k8s.io=true
	I0531 19:27:28.240486  139599 oci.go:103] Successfully created a docker volume force-systemd-flag-124615
	I0531 19:27:28.240563  139599 cli_runner.go:164] Run: docker run --rm --name force-systemd-flag-124615-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-flag-124615 --entrypoint /usr/bin/test -v force-systemd-flag-124615:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685034446-16582@sha256:aa728b22374c829d1e5b0a5d64d51d3e0ae0f2b191381d957516fdff68f357c8 -d /var/lib
	I0531 19:27:28.896228  139599 oci.go:107] Successfully prepared a docker volume force-systemd-flag-124615
	I0531 19:27:28.896273  139599 preload.go:132] Checking if preload exists for k8s version v1.27.2 and runtime crio
	I0531 19:27:28.896291  139599 kic.go:190] Starting extracting preloaded images to volume ...
	I0531 19:27:28.896402  139599 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/16569-2389/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.2-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v force-systemd-flag-124615:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685034446-16582@sha256:aa728b22374c829d1e5b0a5d64d51d3e0ae0f2b191381d957516fdff68f357c8 -I lz4 -xf /preloaded.tar -C /extractDir
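	The Last Start log above shows the free-subnet scan: 192.168.49.0/24 and 192.168.58.0/24 are skipped as taken, and the profile network is created on 192.168.67.0/24. A standalone sketch of that create step, using exactly the flags logged at 19:27:28.116 (assumes the network name is free on your host):
	
	    # Create the profile's bridge network the way the log shows minikube doing it.
	    docker network create --driver=bridge \
	      --subnet=192.168.67.0/24 --gateway=192.168.67.1 \
	      -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 \
	      --label=created_by.minikube.sigs.k8s.io=true \
	      --label=name.minikube.sigs.k8s.io=force-systemd-flag-124615 \
	      force-systemd-flag-124615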
	
	* 
	* ==> CRI-O <==
	* May 31 19:27:14 pause-142925 crio[2567]: time="2023-05-31 19:27:14.643473439Z" level=info msg="Created container c160494d0bf8307482da0f4b11fa31b2602ac4c68688feea87f60330bab849a1: kube-system/coredns-5d78c9869d-pkjvx/coredns" id=8a45e786-c471-4a04-997a-7c5f0659e15f name=/runtime.v1.RuntimeService/CreateContainer
	May 31 19:27:14 pause-142925 crio[2567]: time="2023-05-31 19:27:14.643972522Z" level=info msg="Starting container: c160494d0bf8307482da0f4b11fa31b2602ac4c68688feea87f60330bab849a1" id=40b9bd52-71fd-4eae-a072-14b733239eef name=/runtime.v1.RuntimeService/StartContainer
	May 31 19:27:14 pause-142925 crio[2567]: time="2023-05-31 19:27:14.654947723Z" level=info msg="Started container" PID=3331 containerID=9273f7b7e15106f96991156699b30fa260e61482461e1588686d29e389333d78 description=kube-system/kindnet-tj2db/kindnet-cni id=78610f2a-91df-4cb0-a1e6-b62aff3b7cba name=/runtime.v1.RuntimeService/StartContainer sandboxID=452b7cd4d17bb888e8f8a8503975bd1a5a3ba14adddf5383d6de894a0413f28f
	May 31 19:27:14 pause-142925 crio[2567]: time="2023-05-31 19:27:14.673942984Z" level=info msg="Started container" PID=3355 containerID=c160494d0bf8307482da0f4b11fa31b2602ac4c68688feea87f60330bab849a1 description=kube-system/coredns-5d78c9869d-pkjvx/coredns id=40b9bd52-71fd-4eae-a072-14b733239eef name=/runtime.v1.RuntimeService/StartContainer sandboxID=16acc94ea4916f4eff611859730cd5bf5e8c05be2421a59e6e744d4040bb2529
	May 31 19:27:14 pause-142925 crio[2567]: time="2023-05-31 19:27:14.885806295Z" level=info msg="Created container c11065f5bc5427833241713211d2b26af3f39670c636150e5f6372db8a1aa6eb: kube-system/kube-proxy-hrhmq/kube-proxy" id=eb48e716-f355-484e-ba8c-afb0fc8b4546 name=/runtime.v1.RuntimeService/CreateContainer
	May 31 19:27:14 pause-142925 crio[2567]: time="2023-05-31 19:27:14.886493116Z" level=info msg="Starting container: c11065f5bc5427833241713211d2b26af3f39670c636150e5f6372db8a1aa6eb" id=fd5bceef-2977-4131-a346-c9359da36650 name=/runtime.v1.RuntimeService/StartContainer
	May 31 19:27:15 pause-142925 crio[2567]: time="2023-05-31 19:27:15.040400023Z" level=info msg="Started container" PID=3348 containerID=c11065f5bc5427833241713211d2b26af3f39670c636150e5f6372db8a1aa6eb description=kube-system/kube-proxy-hrhmq/kube-proxy id=fd5bceef-2977-4131-a346-c9359da36650 name=/runtime.v1.RuntimeService/StartContainer sandboxID=fb743709b673303bc91dca669507e1c2b76b82c343d4720bebf2c016c3cb1f79
	May 31 19:27:15 pause-142925 crio[2567]: time="2023-05-31 19:27:15.178972539Z" level=info msg="CNI monitoring event \"/etc/cni/net.d/10-kindnet.conflist.temp\": CREATE"
	May 31 19:27:15 pause-142925 crio[2567]: time="2023-05-31 19:27:15.204564774Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	May 31 19:27:15 pause-142925 crio[2567]: time="2023-05-31 19:27:15.204601450Z" level=info msg="Updated default CNI network name to kindnet"
	May 31 19:27:15 pause-142925 crio[2567]: time="2023-05-31 19:27:15.204618959Z" level=info msg="CNI monitoring event \"/etc/cni/net.d/10-kindnet.conflist.temp\": WRITE"
	May 31 19:27:15 pause-142925 crio[2567]: time="2023-05-31 19:27:15.216199318Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	May 31 19:27:15 pause-142925 crio[2567]: time="2023-05-31 19:27:15.216235748Z" level=info msg="Updated default CNI network name to kindnet"
	May 31 19:27:15 pause-142925 crio[2567]: time="2023-05-31 19:27:15.228531219Z" level=info msg="CNI monitoring event \"/etc/cni/net.d/10-kindnet.conflist.temp\": WRITE"
	May 31 19:27:15 pause-142925 crio[2567]: time="2023-05-31 19:27:15.265009704Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	May 31 19:27:15 pause-142925 crio[2567]: time="2023-05-31 19:27:15.265044895Z" level=info msg="Updated default CNI network name to kindnet"
	May 31 19:27:15 pause-142925 crio[2567]: time="2023-05-31 19:27:15.265061420Z" level=info msg="CNI monitoring event \"/etc/cni/net.d/10-kindnet.conflist.temp\": WRITE"
	May 31 19:27:15 pause-142925 crio[2567]: time="2023-05-31 19:27:15.290049310Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	May 31 19:27:15 pause-142925 crio[2567]: time="2023-05-31 19:27:15.290091409Z" level=info msg="Updated default CNI network name to kindnet"
	May 31 19:27:15 pause-142925 crio[2567]: time="2023-05-31 19:27:15.290113308Z" level=info msg="CNI monitoring event \"/etc/cni/net.d/10-kindnet.conflist.temp\": RENAME"
	May 31 19:27:15 pause-142925 crio[2567]: time="2023-05-31 19:27:15.305852674Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	May 31 19:27:15 pause-142925 crio[2567]: time="2023-05-31 19:27:15.305893445Z" level=info msg="Updated default CNI network name to kindnet"
	May 31 19:27:15 pause-142925 crio[2567]: time="2023-05-31 19:27:15.305911323Z" level=info msg="CNI monitoring event \"/etc/cni/net.d/10-kindnet.conflist\": CREATE"
	May 31 19:27:15 pause-142925 crio[2567]: time="2023-05-31 19:27:15.312942061Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	May 31 19:27:15 pause-142925 crio[2567]: time="2023-05-31 19:27:15.312996525Z" level=info msg="Updated default CNI network name to kindnet"
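	The CRI-O entries above are inotify reactions: kindnet writes /etc/cni/net.d/10-kindnet.conflist.temp and renames it into place, and CRI-O re-reads the default CNI network on each CREATE/WRITE/RENAME event. A sketch for inspecting the resolved config on the node (assumes the pause-142925 profile is still running):
	
	    # Show the conflist CRI-O picked up, then the runtime's own status view.
	    minikube -p pause-142925 ssh -- sudo cat /etc/cni/net.d/10-kindnet.conflist
	    minikube -p pause-142925 ssh -- sudo crictl info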
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	c11065f5bc542       29921a084542255eb81a1a660a603b1a24636d88b202f9010daae75fa32754c0   19 seconds ago      Running             kube-proxy                2                   fb743709b6733       kube-proxy-hrhmq
	c160494d0bf83       97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108   19 seconds ago      Running             coredns                   2                   16acc94ea4916       coredns-5d78c9869d-pkjvx
	9273f7b7e1510       b18bf71b941bae2e12db1c07e567ad14e4febbc778310a0fc64487f1ac877d79   19 seconds ago      Running             kindnet-cni               2                   452b7cd4d17bb       kindnet-tj2db
	f34040e109e2a       2ee705380c3c59a538b853cbe9ae9886ebbd0001a4cea4add5adeea48e5f48d4   25 seconds ago      Running             kube-controller-manager   2                   a91695c8fa141       kube-controller-manager-pause-142925
	85bfc1cda7f0b       72c9df6be7f1b997e4a31b5cb9aa7262e5278905af97e6a69e341e3f0f9bbaae   25 seconds ago      Running             kube-apiserver            2                   c3149f0c18fe1       kube-apiserver-pause-142925
	1a52edf685c10       24bc64e911039ecf00e263be2161797c758b7d82403ca5516ab64047a477f737   25 seconds ago      Running             etcd                      2                   178cab6c62ccf       etcd-pause-142925
	c659c1625e379       305d7ed1dae2877c3a80d434c5fb9f1aac1aa3d2431c36130a3fcd1970e93840   25 seconds ago      Running             kube-scheduler            3                   4c08723eca1fd       kube-scheduler-pause-142925
	b7eeb9e9931e5       305d7ed1dae2877c3a80d434c5fb9f1aac1aa3d2431c36130a3fcd1970e93840   30 seconds ago      Exited              kube-scheduler            2                   4c08723eca1fd       kube-scheduler-pause-142925
	53d9487d7e1b5       72c9df6be7f1b997e4a31b5cb9aa7262e5278905af97e6a69e341e3f0f9bbaae   41 seconds ago      Exited              kube-apiserver            1                   c3149f0c18fe1       kube-apiserver-pause-142925
	ebefff5a4557a       b18bf71b941bae2e12db1c07e567ad14e4febbc778310a0fc64487f1ac877d79   41 seconds ago      Exited              kindnet-cni               1                   452b7cd4d17bb       kindnet-tj2db
	aecfb452ce5e3       97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108   41 seconds ago      Exited              coredns                   1                   16acc94ea4916       coredns-5d78c9869d-pkjvx
	478eaf89680be       29921a084542255eb81a1a660a603b1a24636d88b202f9010daae75fa32754c0   41 seconds ago      Exited              kube-proxy                1                   fb743709b6733       kube-proxy-hrhmq
	7ee1670fa5d41       24bc64e911039ecf00e263be2161797c758b7d82403ca5516ab64047a477f737   42 seconds ago      Exited              etcd                      1                   178cab6c62ccf       etcd-pause-142925
	ccff2afb9f0d7       2ee705380c3c59a538b853cbe9ae9886ebbd0001a4cea4add5adeea48e5f48d4   42 seconds ago      Exited              kube-controller-manager   1                   a91695c8fa141       kube-controller-manager-pause-142925
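	The container table above is CRI-level state: ATTEMPT is the per-container restart count, and the Exited rows are the instances left behind when the second start restarted the control plane. The same view can be pulled from the node directly (sketch, profile assumed up):
	
	    # List running and exited CRI containers, matching the table above.
	    minikube -p pause-142925 ssh -- sudo crictl ps -a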
	
	* 
	* ==> coredns [aecfb452ce5e36e1f9750b8b696e43d37fe95157384a205ac6924bc948c6c429] <==
	* 
	* 
	* ==> coredns [c160494d0bf8307482da0f4b11fa31b2602ac4c68688feea87f60330bab849a1] <==
	* .:53
	[INFO] plugin/reload: Running configuration SHA512 = b7aacdf6a6aa730aafe4d018cac9b7b5ecfb346cba84a99f64521f87aef8b4958639c1cf97967716465791d05bd38f372615327b7cb1d93c850bae532744d54d
	CoreDNS-1.10.1
	linux/arm64, go1.20, 055b2c3
	[INFO] 127.0.0.1:60093 - 43985 "HINFO IN 5462344948435926723.8506925124705925349. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.01730594s
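	The first coredns block above is empty because that container exited before emitting logs; the second shows the normal CoreDNS 1.10.1 startup banner and its loopback HINFO self-check (the NXDOMAIN answer is expected). The same log can be fetched from the live pod (sketch; pod and context names come from this report):
	
	    # Tail the running CoreDNS pod's log via the minikube-created context.
	    kubectl --context pause-142925 -n kube-system logs coredns-5d78c9869d-pkjvx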
	
	* 
	* ==> describe nodes <==
	* Name:               pause-142925
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=pause-142925
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=7022875d4a054c2d518e5e5a7b9d500799d50140
	                    minikube.k8s.io/name=pause-142925
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_05_31T19_25_57_0700
	                    minikube.k8s.io/version=v1.30.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 31 May 2023 19:25:52 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-142925
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 31 May 2023 19:27:34 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 31 May 2023 19:27:13 +0000   Wed, 31 May 2023 19:25:49 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 31 May 2023 19:27:13 +0000   Wed, 31 May 2023 19:25:49 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 31 May 2023 19:27:13 +0000   Wed, 31 May 2023 19:25:49 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 31 May 2023 19:27:13 +0000   Wed, 31 May 2023 19:26:39 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    pause-142925
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022624Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022624Ki
	  pods:               110
	System Info:
	  Machine ID:                 9bc224c0e0844d30b914477be348d6ab
	  System UUID:                d716a0cd-3b16-41b8-bb62-dfaa9f48779d
	  Boot ID:                    35429d0f-2ece-432d-a992-d9f8cda99d9c
	  Kernel Version:             5.15.0-1037-aws
	  OS Image:                   Ubuntu 22.04.2 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.24.5
	  Kubelet Version:            v1.27.2
	  Kube-Proxy Version:         v1.27.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-5d78c9869d-pkjvx                100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     86s
	  kube-system                 etcd-pause-142925                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         98s
	  kube-system                 kindnet-tj2db                           100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      86s
	  kube-system                 kube-apiserver-pause-142925             250m (12%)    0 (0%)      0 (0%)           0 (0%)         99s
	  kube-system                 kube-controller-manager-pause-142925    200m (10%)    0 (0%)      0 (0%)           0 (0%)         98s
	  kube-system                 kube-proxy-hrhmq                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         86s
	  kube-system                 kube-scheduler-pause-142925             100m (5%)     0 (0%)      0 (0%)           0 (0%)         101s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 84s                  kube-proxy       
	  Normal  Starting                 17s                  kube-proxy       
	  Normal  Starting                 107s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  107s (x8 over 107s)  kubelet          Node pause-142925 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    107s (x8 over 107s)  kubelet          Node pause-142925 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     107s (x8 over 107s)  kubelet          Node pause-142925 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    99s                  kubelet          Node pause-142925 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  99s                  kubelet          Node pause-142925 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     99s                  kubelet          Node pause-142925 status is now: NodeHasSufficientPID
	  Normal  Starting                 99s                  kubelet          Starting kubelet.
	  Normal  RegisteredNode           87s                  node-controller  Node pause-142925 event: Registered Node pause-142925 in Controller
	  Normal  NodeReady                55s                  kubelet          Node pause-142925 status is now: NodeReady
	  Normal  Starting                 27s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  26s (x8 over 26s)    kubelet          Node pause-142925 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    26s (x8 over 26s)    kubelet          Node pause-142925 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     26s (x8 over 26s)    kubelet          Node pause-142925 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           8s                   node-controller  Node pause-142925 event: Registered Node pause-142925 in Controller
	
	* 
	* ==> dmesg <==
	* [  +0.000741] FS-Cache: N-cookie c=0000000c [p=00000003 fl=2 nc=0 na=1]
	[  +0.000954] FS-Cache: N-cookie d=0000000077ffde6d{9p.inode} n=0000000074ec7799
	[  +0.001241] FS-Cache: N-key=[8] '915b3b0000000000'
	[  +0.003042] FS-Cache: Duplicate cookie detected
	[  +0.000777] FS-Cache: O-cookie c=00000006 [p=00000003 fl=226 nc=0 na=1]
	[  +0.000979] FS-Cache: O-cookie d=0000000077ffde6d{9p.inode} n=0000000031e1563a
	[  +0.001057] FS-Cache: O-key=[8] '915b3b0000000000'
	[  +0.000743] FS-Cache: N-cookie c=0000000d [p=00000003 fl=2 nc=0 na=1]
	[  +0.000985] FS-Cache: N-cookie d=0000000077ffde6d{9p.inode} n=000000007278ef73
	[  +0.001110] FS-Cache: N-key=[8] '915b3b0000000000'
	[  +2.905928] FS-Cache: Duplicate cookie detected
	[  +0.000862] FS-Cache: O-cookie c=00000004 [p=00000003 fl=226 nc=0 na=1]
	[  +0.001154] FS-Cache: O-cookie d=0000000077ffde6d{9p.inode} n=00000000ad00c953
	[  +0.001219] FS-Cache: O-key=[8] '905b3b0000000000'
	[  +0.000792] FS-Cache: N-cookie c=0000000f [p=00000003 fl=2 nc=0 na=1]
	[  +0.001108] FS-Cache: N-cookie d=0000000077ffde6d{9p.inode} n=00000000be9b4fe0
	[  +0.001229] FS-Cache: N-key=[8] '905b3b0000000000'
	[  +0.280333] FS-Cache: Duplicate cookie detected
	[  +0.000717] FS-Cache: O-cookie c=00000009 [p=00000003 fl=226 nc=0 na=1]
	[  +0.000979] FS-Cache: O-cookie d=0000000077ffde6d{9p.inode} n=000000003fd4f91a
	[  +0.001109] FS-Cache: O-key=[8] '985b3b0000000000'
	[  +0.000717] FS-Cache: N-cookie c=00000010 [p=00000003 fl=2 nc=0 na=1]
	[  +0.000957] FS-Cache: N-cookie d=0000000077ffde6d{9p.inode} n=0000000074ec7799
	[  +0.001067] FS-Cache: N-key=[8] '985b3b0000000000'
	[  +9.760834] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	
	* 
	* ==> etcd [1a52edf685c10d8c8996c6464f17bd55b67f89642dc7f2b0b0b72e885ce088fd] <==
	* {"level":"info","ts":"2023-05-31T19:27:09.187Z","caller":"embed/etcd.go:762","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-05-31T19:27:09.186Z","caller":"etcdserver/server.go:754","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2023-05-31T19:27:09.187Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-05-31T19:27:09.194Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-05-31T19:27:09.194Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-05-31T19:27:09.173Z","caller":"embed/etcd.go:586","msg":"serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2023-05-31T19:27:09.195Z","caller":"embed/etcd.go:558","msg":"cmux::serve","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2023-05-31T19:27:09.187Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 switched to configuration voters=(16896983918768216326)"}
	{"level":"info","ts":"2023-05-31T19:27:09.195Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","added-peer-id":"ea7e25599daad906","added-peer-peer-urls":["https://192.168.76.2:2380"]}
	{"level":"info","ts":"2023-05-31T19:27:09.195Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","cluster-version":"3.5"}
	{"level":"info","ts":"2023-05-31T19:27:09.195Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-05-31T19:27:10.292Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 is starting a new election at term 2"}
	{"level":"info","ts":"2023-05-31T19:27:10.292Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became pre-candidate at term 2"}
	{"level":"info","ts":"2023-05-31T19:27:10.292Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgPreVoteResp from ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2023-05-31T19:27:10.292Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became candidate at term 3"}
	{"level":"info","ts":"2023-05-31T19:27:10.292Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2023-05-31T19:27:10.292Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became leader at term 3"}
	{"level":"info","ts":"2023-05-31T19:27:10.292Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2023-05-31T19:27:10.296Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"ea7e25599daad906","local-member-attributes":"{Name:pause-142925 ClientURLs:[https://192.168.76.2:2379]}","request-path":"/0/members/ea7e25599daad906/attributes","cluster-id":"6f20f2c4b2fb5f8a","publish-timeout":"7s"}
	{"level":"info","ts":"2023-05-31T19:27:10.296Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-05-31T19:27:10.296Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-05-31T19:27:10.296Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-05-31T19:27:10.296Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-05-31T19:27:10.297Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-05-31T19:27:10.300Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"192.168.76.2:2379"}
	
	* 
	* ==> etcd [7ee1670fa5d41728bdabc13111b625c2ed395d205a748387291c212bf8c167df] <==
	* {"level":"warn","ts":"2023-05-31T19:26:53.110Z","caller":"flags/flag.go:93","msg":"unrecognized environment variable","environment-variable":"ETCD_UNSUPPORTED_ARCH=arm64"}
	{"level":"info","ts":"2023-05-31T19:26:53.112Z","caller":"etcdmain/etcd.go:73","msg":"Running: ","args":["etcd","--advertise-client-urls=https://192.168.76.2:2379","--cert-file=/var/lib/minikube/certs/etcd/server.crt","--client-cert-auth=true","--data-dir=/var/lib/minikube/etcd","--experimental-initial-corrupt-check=true","--experimental-watch-progress-notify-interval=5s","--initial-advertise-peer-urls=https://192.168.76.2:2380","--initial-cluster=pause-142925=https://192.168.76.2:2380","--key-file=/var/lib/minikube/certs/etcd/server.key","--listen-client-urls=https://127.0.0.1:2379,https://192.168.76.2:2379","--listen-metrics-urls=http://127.0.0.1:2381","--listen-peer-urls=https://192.168.76.2:2380","--name=pause-142925","--peer-cert-file=/var/lib/minikube/certs/etcd/peer.crt","--peer-client-cert-auth=true","--peer-key-file=/var/lib/minikube/certs/etcd/peer.key","--peer-trusted-ca-file=/var/lib/minikube/certs/etcd/ca.crt","--proxy-refresh-interval=70000","--snapshot-count=10000","--trusted-ca-file=/
var/lib/minikube/certs/etcd/ca.crt"]}
	{"level":"info","ts":"2023-05-31T19:26:53.119Z","caller":"etcdmain/etcd.go:116","msg":"server has been already initialized","data-dir":"/var/lib/minikube/etcd","dir-type":"member"}
	{"level":"info","ts":"2023-05-31T19:26:53.120Z","caller":"embed/etcd.go:124","msg":"configuring peer listeners","listen-peer-urls":["https://192.168.76.2:2380"]}
	{"level":"info","ts":"2023-05-31T19:26:53.120Z","caller":"embed/etcd.go:484","msg":"starting with peer TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/peer.crt, key = /var/lib/minikube/certs/etcd/peer.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2023-05-31T19:26:53.120Z","caller":"embed/etcd.go:132","msg":"configuring client listeners","listen-client-urls":["https://127.0.0.1:2379","https://192.168.76.2:2379"]}
	{"level":"info","ts":"2023-05-31T19:26:53.121Z","caller":"embed/etcd.go:306","msg":"starting an etcd server","etcd-version":"3.5.7","git-sha":"215b53cf3","go-version":"go1.17.13","go-os":"linux","go-arch":"arm64","max-cpu-set":2,"max-cpu-available":2,"member-initialized":true,"name":"pause-142925","data-dir":"/var/lib/minikube/etcd","wal-dir":"","wal-dir-dedicated":"","member-dir":"/var/lib/minikube/etcd/member","force-new-cluster":false,"heartbeat-interval":"100ms","election-timeout":"1s","initial-election-tick-advance":true,"snapshot-count":10000,"max-wals":5,"max-snapshots":5,"snapshot-catchup-entries":5000,"initial-advertise-peer-urls":["https://192.168.76.2:2380"],"listen-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.76.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"],"cors":["*"],"host-whitelist":["*"],"initial-cluster":"","initial-cluster-state":"new","initial-cluster-token"
:"","quota-backend-bytes":2147483648,"max-request-bytes":1572864,"max-concurrent-streams":4294967295,"pre-vote":true,"initial-corrupt-check":true,"corrupt-check-time-interval":"0s","compact-check-time-enabled":false,"compact-check-time-interval":"1m0s","auto-compaction-mode":"periodic","auto-compaction-retention":"0s","auto-compaction-interval":"0s","discovery-url":"","discovery-proxy":"","downgrade-check-interval":"5s"}
	{"level":"info","ts":"2023-05-31T19:26:53.150Z","caller":"etcdserver/backend.go:81","msg":"opened backend db","path":"/var/lib/minikube/etcd/member/snap/db","took":"5.91824ms"}
	{"level":"info","ts":"2023-05-31T19:26:53.219Z","caller":"etcdserver/server.go:530","msg":"No snapshot found. Recovering WAL from scratch!"}
	
	* 
	* ==> kernel <==
	*  19:27:35 up  1:09,  0 users,  load average: 3.10, 2.56, 1.93
	Linux pause-142925 5.15.0-1037-aws #41~20.04.1-Ubuntu SMP Mon May 22 18:20:20 UTC 2023 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.2 LTS"
	
	* 
	* ==> kindnet [9273f7b7e15106f96991156699b30fa260e61482461e1588686d29e389333d78] <==
	* I0531 19:27:14.736899       1 main.go:102] connected to apiserver: https://10.96.0.1:443
	I0531 19:27:14.736985       1 main.go:107] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I0531 19:27:14.737110       1 main.go:116] setting mtu 1500 for CNI 
	I0531 19:27:14.737120       1 main.go:146] kindnetd IP family: "ipv4"
	I0531 19:27:14.737134       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I0531 19:27:15.158632       1 main.go:223] Handling node with IPs: map[192.168.76.2:{}]
	I0531 19:27:15.158668       1 main.go:227] handling current node
	I0531 19:27:25.244972       1 main.go:223] Handling node with IPs: map[192.168.76.2:{}]
	I0531 19:27:25.245095       1 main.go:227] handling current node
	
	* 
	* ==> kindnet [ebefff5a4557aa51ef59f346ac5bd484e9300fed7b41c4dfd142d34f42af2a35] <==
	* I0531 19:26:52.987636       1 main.go:102] connected to apiserver: https://10.96.0.1:443
	I0531 19:26:52.988115       1 main.go:107] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I0531 19:26:52.988287       1 main.go:116] setting mtu 1500 for CNI 
	I0531 19:26:52.988598       1 main.go:146] kindnetd IP family: "ipv4"
	I0531 19:26:52.988696       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	
	* 
	* ==> kube-apiserver [53d9487d7e1b556075195bfb0286d52f52853cdea8e81c656c257bcf3e6f01d3] <==
	* 
	* 
	* ==> kube-apiserver [85bfc1cda7f0b6431f5784df20c83004af310b71189266c271dd5ea12229a281] <==
	* I0531 19:27:13.171738       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0531 19:27:13.171809       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0531 19:27:13.171950       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0531 19:27:13.178345       1 available_controller.go:423] Starting AvailableConditionController
	I0531 19:27:13.178437       1 cache.go:32] Waiting for caches to sync for AvailableConditionController controller
	I0531 19:27:13.178526       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0531 19:27:13.186298       1 crdregistration_controller.go:111] Starting crd-autoregister controller
	I0531 19:27:13.289261       1 shared_informer.go:311] Waiting for caches to sync for crd-autoregister
	I0531 19:27:13.600105       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I0531 19:27:13.601766       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I0531 19:27:13.613115       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0531 19:27:13.617519       1 shared_informer.go:318] Caches are synced for node_authorizer
	I0531 19:27:13.679505       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0531 19:27:13.691906       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I0531 19:27:13.692018       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0531 19:27:13.725134       1 cache.go:39] Caches are synced for autoregister controller
	I0531 19:27:13.729909       1 apf_controller.go:366] Running API Priority and Fairness config worker
	I0531 19:27:13.730017       1 apf_controller.go:369] Running API Priority and Fairness periodic rebalancing process
	I0531 19:27:13.730800       1 shared_informer.go:318] Caches are synced for configmaps
	I0531 19:27:14.226607       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0531 19:27:16.933986       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I0531 19:27:17.156439       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I0531 19:27:17.186940       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I0531 19:27:17.289609       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0531 19:27:17.303164       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	
	* 
	* ==> kube-controller-manager [ccff2afb9f0d7350e36e7ec256289cccb2c5a8cfe9135602ab761d803ef779e5] <==
	* 
	* 
	* ==> kube-controller-manager [f34040e109e2a21d20611802eca9c1ae6345b0cf236a31620b512fb6742ff23b] <==
	* I0531 19:27:26.684014       1 taint_manager.go:211] "Sending events to api server"
	I0531 19:27:26.682465       1 event.go:307] "Event occurred" object="pause-142925" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node pause-142925 event: Registered Node pause-142925 in Controller"
	I0531 19:27:26.683944       1 shared_informer.go:318] Caches are synced for PV protection
	I0531 19:27:26.688258       1 shared_informer.go:318] Caches are synced for crt configmap
	I0531 19:27:26.704724       1 shared_informer.go:318] Caches are synced for deployment
	I0531 19:27:26.707741       1 shared_informer.go:318] Caches are synced for GC
	I0531 19:27:26.716259       1 shared_informer.go:318] Caches are synced for HPA
	I0531 19:27:26.716403       1 shared_informer.go:318] Caches are synced for daemon sets
	I0531 19:27:26.730504       1 shared_informer.go:318] Caches are synced for service account
	I0531 19:27:26.732011       1 shared_informer.go:318] Caches are synced for namespace
	I0531 19:27:26.751734       1 shared_informer.go:318] Caches are synced for attach detach
	I0531 19:27:26.768293       1 shared_informer.go:318] Caches are synced for ephemeral
	I0531 19:27:26.768311       1 shared_informer.go:318] Caches are synced for endpoint
	I0531 19:27:26.772882       1 shared_informer.go:318] Caches are synced for expand
	I0531 19:27:26.789012       1 shared_informer.go:318] Caches are synced for endpoint_slice_mirroring
	I0531 19:27:26.789120       1 shared_informer.go:318] Caches are synced for endpoint_slice
	I0531 19:27:26.802051       1 shared_informer.go:318] Caches are synced for stateful set
	I0531 19:27:26.833475       1 shared_informer.go:318] Caches are synced for disruption
	I0531 19:27:26.841834       1 shared_informer.go:318] Caches are synced for PVC protection
	I0531 19:27:26.843147       1 shared_informer.go:318] Caches are synced for persistent volume
	I0531 19:27:26.882423       1 shared_informer.go:318] Caches are synced for resource quota
	I0531 19:27:26.894025       1 shared_informer.go:318] Caches are synced for resource quota
	I0531 19:27:27.200457       1 shared_informer.go:318] Caches are synced for garbage collector
	I0531 19:27:27.200492       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I0531 19:27:27.283373       1 shared_informer.go:318] Caches are synced for garbage collector
	
	* 
	* ==> kube-proxy [478eaf89680befbce2bd6c495bd58bce3c8fc95be30d948c1b7a09dbeb666faf] <==
	* 
	* 
	* ==> kube-proxy [c11065f5bc5427833241713211d2b26af3f39670c636150e5f6372db8a1aa6eb] <==
	* I0531 19:27:16.745879       1 node.go:141] Successfully retrieved node IP: 192.168.76.2
	I0531 19:27:16.746139       1 server_others.go:110] "Detected node IP" address="192.168.76.2"
	I0531 19:27:16.746198       1 server_others.go:551] "Using iptables proxy"
	I0531 19:27:16.889673       1 server_others.go:190] "Using iptables Proxier"
	I0531 19:27:16.889774       1 server_others.go:197] "kube-proxy running in dual-stack mode" ipFamily=IPv4
	I0531 19:27:16.889806       1 server_others.go:198] "Creating dualStackProxier for iptables"
	I0531 19:27:16.889850       1 server_others.go:481] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, defaulting to no-op detect-local for IPv6"
	I0531 19:27:16.889942       1 proxier.go:253] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0531 19:27:16.890540       1 server.go:657] "Version info" version="v1.27.2"
	I0531 19:27:16.896853       1 server.go:659] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0531 19:27:16.897843       1 config.go:188] "Starting service config controller"
	I0531 19:27:16.898075       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0531 19:27:16.908915       1 config.go:97] "Starting endpoint slice config controller"
	I0531 19:27:16.909006       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0531 19:27:16.909647       1 config.go:315] "Starting node config controller"
	I0531 19:27:16.909701       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0531 19:27:17.009572       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0531 19:27:17.009619       1 shared_informer.go:318] Caches are synced for service config
	I0531 19:27:17.011048       1 shared_informer.go:318] Caches are synced for node config
	
	* 
	* ==> kube-scheduler [b7eeb9e9931e50ff612422f3fc028a906b4392080447dc3fb66403d47e63ac4c] <==
	* I0531 19:27:04.752543       1 serving.go:348] Generated self-signed cert in-memory
	W0531 19:27:05.604610       1 authentication.go:368] Error looking up in-cluster authentication configuration: Get "https://192.168.76.2:8443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp 192.168.76.2:8443: connect: connection refused
	W0531 19:27:05.604640       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0531 19:27:05.604647       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0531 19:27:05.608016       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.27.2"
	I0531 19:27:05.608057       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0531 19:27:05.609542       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0531 19:27:05.609637       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0531 19:27:05.609695       1 shared_informer.go:314] unable to sync caches for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0531 19:27:05.609730       1 configmap_cafile_content.go:210] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0531 19:27:05.610299       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I0531 19:27:05.610415       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0531 19:27:05.610457       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	I0531 19:27:05.610577       1 secure_serving.go:255] Stopped listening on 127.0.0.1:10259
	E0531 19:27:05.610717       1 run.go:74] "command failed" err="finished without leader elect"
	
	* 
	* ==> kube-scheduler [c659c1625e37929308cc4ad220da1e17657ca23ffc4f9d6775ade9a8c8eb4d92] <==
	* I0531 19:27:11.813949       1 serving.go:348] Generated self-signed cert in-memory
	I0531 19:27:16.011346       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.27.2"
	I0531 19:27:16.011384       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0531 19:27:16.097240       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I0531 19:27:16.097403       1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController
	I0531 19:27:16.097425       1 shared_informer.go:311] Waiting for caches to sync for RequestHeaderAuthRequestController
	I0531 19:27:16.097459       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0531 19:27:16.097476       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0531 19:27:16.097491       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I0531 19:27:16.097497       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I0531 19:27:16.097518       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0531 19:27:16.198101       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I0531 19:27:16.198231       1 shared_informer.go:318] Caches are synced for RequestHeaderAuthRequestController
	I0531 19:27:16.198338       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* May 31 19:27:08 pause-142925 kubelet[3078]: E0531 19:27:08.867692    3078 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.76.2:8443: connect: connection refused
	May 31 19:27:09 pause-142925 kubelet[3078]: I0531 19:27:09.543901    3078 kubelet_node_status.go:70] "Attempting to register node" node="pause-142925"
	May 31 19:27:13 pause-142925 kubelet[3078]: I0531 19:27:13.636605    3078 kubelet_node_status.go:108] "Node was previously registered" node="pause-142925"
	May 31 19:27:13 pause-142925 kubelet[3078]: I0531 19:27:13.636722    3078 kubelet_node_status.go:73] "Successfully registered node" node="pause-142925"
	May 31 19:27:13 pause-142925 kubelet[3078]: I0531 19:27:13.639279    3078 kuberuntime_manager.go:1460] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	May 31 19:27:13 pause-142925 kubelet[3078]: I0531 19:27:13.640018    3078 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	May 31 19:27:14 pause-142925 kubelet[3078]: I0531 19:27:14.023872    3078 apiserver.go:52] "Watching apiserver"
	May 31 19:27:14 pause-142925 kubelet[3078]: I0531 19:27:14.028232    3078 topology_manager.go:212] "Topology Admit Handler"
	May 31 19:27:14 pause-142925 kubelet[3078]: I0531 19:27:14.028343    3078 topology_manager.go:212] "Topology Admit Handler"
	May 31 19:27:14 pause-142925 kubelet[3078]: I0531 19:27:14.028413    3078 topology_manager.go:212] "Topology Admit Handler"
	May 31 19:27:14 pause-142925 kubelet[3078]: I0531 19:27:14.031105    3078 desired_state_of_world_populator.go:153] "Finished populating initial desired state of world"
	May 31 19:27:14 pause-142925 kubelet[3078]: I0531 19:27:14.068009    3078 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/41ccce12-4ef2-49e2-9bbd-a664a715e971-lib-modules\") pod \"kube-proxy-hrhmq\" (UID: \"41ccce12-4ef2-49e2-9bbd-a664a715e971\") " pod="kube-system/kube-proxy-hrhmq"
	May 31 19:27:14 pause-142925 kubelet[3078]: I0531 19:27:14.068060    3078 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/684be7a1-9260-4d7d-afe4-22eba3383872-config-volume\") pod \"coredns-5d78c9869d-pkjvx\" (UID: \"684be7a1-9260-4d7d-afe4-22eba3383872\") " pod="kube-system/coredns-5d78c9869d-pkjvx"
	May 31 19:27:14 pause-142925 kubelet[3078]: I0531 19:27:14.068093    3078 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/78b57b57-65bd-42d1-9c09-929951cdcb97-xtables-lock\") pod \"kindnet-tj2db\" (UID: \"78b57b57-65bd-42d1-9c09-929951cdcb97\") " pod="kube-system/kindnet-tj2db"
	May 31 19:27:14 pause-142925 kubelet[3078]: I0531 19:27:14.068118    3078 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q78pt\" (UniqueName: \"kubernetes.io/projected/41ccce12-4ef2-49e2-9bbd-a664a715e971-kube-api-access-q78pt\") pod \"kube-proxy-hrhmq\" (UID: \"41ccce12-4ef2-49e2-9bbd-a664a715e971\") " pod="kube-system/kube-proxy-hrhmq"
	May 31 19:27:14 pause-142925 kubelet[3078]: I0531 19:27:14.068143    3078 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wrtlz\" (UniqueName: \"kubernetes.io/projected/684be7a1-9260-4d7d-afe4-22eba3383872-kube-api-access-wrtlz\") pod \"coredns-5d78c9869d-pkjvx\" (UID: \"684be7a1-9260-4d7d-afe4-22eba3383872\") " pod="kube-system/coredns-5d78c9869d-pkjvx"
	May 31 19:27:14 pause-142925 kubelet[3078]: I0531 19:27:14.068166    3078 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/41ccce12-4ef2-49e2-9bbd-a664a715e971-xtables-lock\") pod \"kube-proxy-hrhmq\" (UID: \"41ccce12-4ef2-49e2-9bbd-a664a715e971\") " pod="kube-system/kube-proxy-hrhmq"
	May 31 19:27:14 pause-142925 kubelet[3078]: I0531 19:27:14.068188    3078 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/78b57b57-65bd-42d1-9c09-929951cdcb97-cni-cfg\") pod \"kindnet-tj2db\" (UID: \"78b57b57-65bd-42d1-9c09-929951cdcb97\") " pod="kube-system/kindnet-tj2db"
	May 31 19:27:14 pause-142925 kubelet[3078]: I0531 19:27:14.068210    3078 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/78b57b57-65bd-42d1-9c09-929951cdcb97-lib-modules\") pod \"kindnet-tj2db\" (UID: \"78b57b57-65bd-42d1-9c09-929951cdcb97\") " pod="kube-system/kindnet-tj2db"
	May 31 19:27:14 pause-142925 kubelet[3078]: I0531 19:27:14.068242    3078 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qgfnl\" (UniqueName: \"kubernetes.io/projected/78b57b57-65bd-42d1-9c09-929951cdcb97-kube-api-access-qgfnl\") pod \"kindnet-tj2db\" (UID: \"78b57b57-65bd-42d1-9c09-929951cdcb97\") " pod="kube-system/kindnet-tj2db"
	May 31 19:27:14 pause-142925 kubelet[3078]: I0531 19:27:14.068266    3078 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/41ccce12-4ef2-49e2-9bbd-a664a715e971-kube-proxy\") pod \"kube-proxy-hrhmq\" (UID: \"41ccce12-4ef2-49e2-9bbd-a664a715e971\") " pod="kube-system/kube-proxy-hrhmq"
	May 31 19:27:14 pause-142925 kubelet[3078]: I0531 19:27:14.068281    3078 reconciler.go:41] "Reconciler: start to sync state"
	May 31 19:27:14 pause-142925 kubelet[3078]: I0531 19:27:14.328953    3078 scope.go:115] "RemoveContainer" containerID="ebefff5a4557aa51ef59f346ac5bd484e9300fed7b41c4dfd142d34f42af2a35"
	May 31 19:27:14 pause-142925 kubelet[3078]: I0531 19:27:14.331177    3078 scope.go:115] "RemoveContainer" containerID="aecfb452ce5e36e1f9750b8b696e43d37fe95157384a205ac6924bc948c6c429"
	May 31 19:27:14 pause-142925 kubelet[3078]: I0531 19:27:14.331499    3078 scope.go:115] "RemoveContainer" containerID="478eaf89680befbce2bd6c495bd58bce3c8fc95be30d948c1b7a09dbeb666faf"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p pause-142925 -n pause-142925
helpers_test.go:261: (dbg) Run:  kubectl --context pause-142925 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestPause/serial/SecondStartNoReconfiguration (53.63s)

                                                
                                    

Test pass (259/296)

Order passed test Duration
3 TestDownloadOnly/v1.16.0/json-events 17.68
4 TestDownloadOnly/v1.16.0/preload-exists 0
8 TestDownloadOnly/v1.16.0/LogsDuration 0.37
10 TestDownloadOnly/v1.27.2/json-events 10.67
11 TestDownloadOnly/v1.27.2/preload-exists 0
15 TestDownloadOnly/v1.27.2/LogsDuration 0.09
16 TestDownloadOnly/DeleteAll 0.34
17 TestDownloadOnly/DeleteAlwaysSucceeds 0.17
19 TestBinaryMirror 0.61
22 TestAddons/Setup 151.09
26 TestAddons/parallel/InspektorGadget 10.58
27 TestAddons/parallel/MetricsServer 5.59
30 TestAddons/parallel/CSI 52.75
31 TestAddons/parallel/Headlamp 13.6
32 TestAddons/parallel/CloudSpanner 5.53
35 TestAddons/serial/GCPAuth/Namespaces 0.21
36 TestAddons/StoppedEnableDisable 12.27
37 TestCertOptions 36.34
38 TestCertExpiration 261.52
40 TestForceSystemdFlag 39.94
41 TestForceSystemdEnv 43.23
46 TestErrorSpam/setup 31.71
47 TestErrorSpam/start 0.82
48 TestErrorSpam/status 1.06
49 TestErrorSpam/pause 1.8
50 TestErrorSpam/unpause 2.01
51 TestErrorSpam/stop 1.46
54 TestFunctional/serial/CopySyncFile 0
55 TestFunctional/serial/StartWithProxy 77.32
56 TestFunctional/serial/AuditLog 0
57 TestFunctional/serial/SoftStart 54.94
58 TestFunctional/serial/KubeContext 0.06
59 TestFunctional/serial/KubectlGetPods 0.1
62 TestFunctional/serial/CacheCmd/cache/add_remote 4
63 TestFunctional/serial/CacheCmd/cache/add_local 1.02
64 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.05
65 TestFunctional/serial/CacheCmd/cache/list 0.05
66 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.32
67 TestFunctional/serial/CacheCmd/cache/cache_reload 2.15
68 TestFunctional/serial/CacheCmd/cache/delete 0.11
69 TestFunctional/serial/MinikubeKubectlCmd 0.14
70 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.14
71 TestFunctional/serial/ExtraConfig 32.44
72 TestFunctional/serial/ComponentHealth 0.12
73 TestFunctional/serial/LogsCmd 1.82
74 TestFunctional/serial/LogsFileCmd 1.84
76 TestFunctional/parallel/ConfigCmd 0.43
77 TestFunctional/parallel/DashboardCmd 11.05
78 TestFunctional/parallel/DryRun 0.72
79 TestFunctional/parallel/InternationalLanguage 0.28
80 TestFunctional/parallel/StatusCmd 1.06
84 TestFunctional/parallel/ServiceCmdConnect 8.69
85 TestFunctional/parallel/AddonsCmd 0.19
86 TestFunctional/parallel/PersistentVolumeClaim 28.47
88 TestFunctional/parallel/SSHCmd 0.67
89 TestFunctional/parallel/CpCmd 1.49
91 TestFunctional/parallel/FileSync 0.41
92 TestFunctional/parallel/CertSync 2.13
96 TestFunctional/parallel/NodeLabels 0.11
98 TestFunctional/parallel/NonActiveRuntimeDisabled 0.75
100 TestFunctional/parallel/License 0.33
101 TestFunctional/parallel/Version/short 0.08
102 TestFunctional/parallel/Version/components 0.79
103 TestFunctional/parallel/ImageCommands/ImageListShort 0.32
104 TestFunctional/parallel/ImageCommands/ImageListTable 0.28
105 TestFunctional/parallel/ImageCommands/ImageListJson 0.33
106 TestFunctional/parallel/ImageCommands/ImageListYaml 0.33
107 TestFunctional/parallel/ImageCommands/ImageBuild 5.17
108 TestFunctional/parallel/ImageCommands/Setup 1.81
109 TestFunctional/parallel/UpdateContextCmd/no_changes 0.26
110 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.28
111 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.23
112 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 6.16
113 TestFunctional/parallel/ServiceCmd/DeployApp 11.43
114 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 2.99
115 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 6.15
116 TestFunctional/parallel/ServiceCmd/List 0.46
117 TestFunctional/parallel/ServiceCmd/JSONOutput 0.47
118 TestFunctional/parallel/ServiceCmd/HTTPS 0.44
119 TestFunctional/parallel/ServiceCmd/Format 0.57
120 TestFunctional/parallel/ServiceCmd/URL 0.61
121 TestFunctional/parallel/ImageCommands/ImageSaveToFile 1.05
123 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.85
124 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
125 TestFunctional/parallel/ImageCommands/ImageRemove 0.79
127 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 10.61
128 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 1.88
129 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 3.14
130 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.08
131 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
135 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
136 TestFunctional/parallel/ProfileCmd/profile_not_create 0.4
137 TestFunctional/parallel/ProfileCmd/profile_list 0.4
138 TestFunctional/parallel/ProfileCmd/profile_json_output 0.38
139 TestFunctional/parallel/MountCmd/any-port 8.37
140 TestFunctional/parallel/MountCmd/specific-port 2.19
141 TestFunctional/parallel/MountCmd/VerifyCleanup 1.91
142 TestFunctional/delete_addon-resizer_images 0.08
143 TestFunctional/delete_my-image_image 0.02
144 TestFunctional/delete_minikube_cached_images 0.02
148 TestIngressAddonLegacy/StartLegacyK8sCluster 117.75
150 TestIngressAddonLegacy/serial/ValidateIngressAddonActivation 11.26
151 TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation 0.53
155 TestJSONOutput/start/Command 78.67
156 TestJSONOutput/start/Audit 0
158 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
159 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
161 TestJSONOutput/pause/Command 0.8
162 TestJSONOutput/pause/Audit 0
164 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
165 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
167 TestJSONOutput/unpause/Command 0.72
168 TestJSONOutput/unpause/Audit 0
170 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
171 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
173 TestJSONOutput/stop/Command 5.86
174 TestJSONOutput/stop/Audit 0
176 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
177 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
178 TestErrorJSONOutput 0.23
180 TestKicCustomNetwork/create_custom_network 56.71
181 TestKicCustomNetwork/use_default_bridge_network 36.85
182 TestKicExistingNetwork 33.18
183 TestKicCustomSubnet 34.67
184 TestKicStaticIP 38.97
185 TestMainNoArgs 0.05
186 TestMinikubeProfile 68.26
189 TestMountStart/serial/StartWithMountFirst 7.84
190 TestMountStart/serial/VerifyMountFirst 0.28
191 TestMountStart/serial/StartWithMountSecond 6.7
192 TestMountStart/serial/VerifyMountSecond 0.26
193 TestMountStart/serial/DeleteFirst 1.74
194 TestMountStart/serial/VerifyMountPostDelete 0.27
195 TestMountStart/serial/Stop 1.23
196 TestMountStart/serial/RestartStopped 7.97
197 TestMountStart/serial/VerifyMountPostStop 0.27
200 TestMultiNode/serial/FreshStart2Nodes 69.59
201 TestMultiNode/serial/DeployApp2Nodes 6.94
203 TestMultiNode/serial/AddNode 46.62
204 TestMultiNode/serial/ProfileList 0.35
205 TestMultiNode/serial/CopyFile 10.46
206 TestMultiNode/serial/StopNode 2.35
207 TestMultiNode/serial/StartAfterStop 12.41
208 TestMultiNode/serial/RestartKeepsNodes 117.54
209 TestMultiNode/serial/DeleteNode 5.13
210 TestMultiNode/serial/StopMultiNode 24
211 TestMultiNode/serial/RestartMultiNode 80.08
212 TestMultiNode/serial/ValidateNameConflict 39.5
219 TestScheduledStopUnix 112.14
222 TestInsufficientStorage 10.63
225 TestKubernetesUpgrade 386.78
228 TestNoKubernetes/serial/StartNoK8sWithVersion 0.09
229 TestNoKubernetes/serial/StartWithK8s 36.48
230 TestNoKubernetes/serial/StartWithStopK8s 6.64
231 TestNoKubernetes/serial/Start 8.41
232 TestNoKubernetes/serial/VerifyK8sNotRunning 0.29
233 TestNoKubernetes/serial/ProfileList 0.6
234 TestNoKubernetes/serial/Stop 1.24
235 TestNoKubernetes/serial/StartNoArgs 6.96
236 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.27
237 TestStoppedBinaryUpgrade/Setup 1.15
239 TestStoppedBinaryUpgrade/MinikubeLogs 0.72
248 TestPause/serial/Start 76.63
257 TestNetworkPlugins/group/false 4.56
262 TestStartStop/group/old-k8s-version/serial/FirstStart 125.55
263 TestStartStop/group/old-k8s-version/serial/DeployApp 9.56
264 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 0.85
265 TestStartStop/group/old-k8s-version/serial/Stop 12.06
266 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.17
267 TestStartStop/group/old-k8s-version/serial/SecondStart 459.19
269 TestStartStop/group/no-preload/serial/FirstStart 63.4
270 TestStartStop/group/no-preload/serial/DeployApp 8.48
271 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.08
272 TestStartStop/group/no-preload/serial/Stop 12.13
273 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.16
274 TestStartStop/group/no-preload/serial/SecondStart 599.97
275 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 5.03
276 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.1
277 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.38
278 TestStartStop/group/old-k8s-version/serial/Pause 3.88
280 TestStartStop/group/embed-certs/serial/FirstStart 87.7
281 TestStartStop/group/embed-certs/serial/DeployApp 9.58
282 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.01
283 TestStartStop/group/embed-certs/serial/Stop 12.12
284 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.16
285 TestStartStop/group/embed-certs/serial/SecondStart 623.57
286 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 5.03
287 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.12
288 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.33
289 TestStartStop/group/no-preload/serial/Pause 3.39
291 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 78.53
292 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 10.49
293 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.03
294 TestStartStop/group/default-k8s-diff-port/serial/Stop 12.11
295 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.18
296 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 626.51
297 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 5.03
298 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.11
299 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.37
300 TestStartStop/group/embed-certs/serial/Pause 3.32
302 TestStartStop/group/newest-cni/serial/FirstStart 46.07
303 TestStartStop/group/newest-cni/serial/DeployApp 0
304 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.96
305 TestStartStop/group/newest-cni/serial/Stop 1.26
306 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.16
307 TestStartStop/group/newest-cni/serial/SecondStart 31.96
308 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
309 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
310 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.37
311 TestStartStop/group/newest-cni/serial/Pause 3.15
312 TestNetworkPlugins/group/auto/Start 79.85
313 TestNetworkPlugins/group/auto/KubeletFlags 0.33
314 TestNetworkPlugins/group/auto/NetCatPod 9.38
315 TestNetworkPlugins/group/auto/DNS 0.22
316 TestNetworkPlugins/group/auto/Localhost 0.21
317 TestNetworkPlugins/group/auto/HairPin 0.18
318 TestNetworkPlugins/group/kindnet/Start 77.5
319 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 5.03
320 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.11
321 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.33
322 TestStartStop/group/default-k8s-diff-port/serial/Pause 3.28
323 TestNetworkPlugins/group/calico/Start 71.81
324 TestNetworkPlugins/group/kindnet/ControllerPod 5.04
325 TestNetworkPlugins/group/kindnet/KubeletFlags 0.36
326 TestNetworkPlugins/group/kindnet/NetCatPod 13.56
327 TestNetworkPlugins/group/kindnet/DNS 0.33
328 TestNetworkPlugins/group/kindnet/Localhost 0.35
329 TestNetworkPlugins/group/kindnet/HairPin 0.31
330 TestNetworkPlugins/group/custom-flannel/Start 71.18
331 TestNetworkPlugins/group/calico/ControllerPod 5.06
332 TestNetworkPlugins/group/calico/KubeletFlags 0.49
333 TestNetworkPlugins/group/calico/NetCatPod 13.71
334 TestNetworkPlugins/group/calico/DNS 0.29
335 TestNetworkPlugins/group/calico/Localhost 0.22
336 TestNetworkPlugins/group/calico/HairPin 0.24
337 TestNetworkPlugins/group/enable-default-cni/Start 91.27
338 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.36
339 TestNetworkPlugins/group/custom-flannel/NetCatPod 11.5
340 TestNetworkPlugins/group/custom-flannel/DNS 0.27
341 TestNetworkPlugins/group/custom-flannel/Localhost 0.23
342 TestNetworkPlugins/group/custom-flannel/HairPin 0.24
343 TestNetworkPlugins/group/flannel/Start 71.68
344 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.38
345 TestNetworkPlugins/group/enable-default-cni/NetCatPod 12.62
346 TestNetworkPlugins/group/enable-default-cni/DNS 0.22
347 TestNetworkPlugins/group/enable-default-cni/Localhost 0.2
348 TestNetworkPlugins/group/enable-default-cni/HairPin 0.24
349 TestNetworkPlugins/group/flannel/ControllerPod 5.03
350 TestNetworkPlugins/group/flannel/KubeletFlags 0.34
351 TestNetworkPlugins/group/flannel/NetCatPod 12.52
352 TestNetworkPlugins/group/bridge/Start 92.74
353 TestNetworkPlugins/group/flannel/DNS 0.25
354 TestNetworkPlugins/group/flannel/Localhost 0.19
355 TestNetworkPlugins/group/flannel/HairPin 0.18
356 TestNetworkPlugins/group/bridge/KubeletFlags 0.29
357 TestNetworkPlugins/group/bridge/NetCatPod 10.35
358 TestNetworkPlugins/group/bridge/DNS 0.22
359 TestNetworkPlugins/group/bridge/Localhost 0.19
360 TestNetworkPlugins/group/bridge/HairPin 0.19
x
+
TestDownloadOnly/v1.16.0/json-events (17.68s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-924367 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:69: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-924367 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (17.684088669s)
--- PASS: TestDownloadOnly/v1.16.0/json-events (17.68s)

                                                
                                    
x
+
TestDownloadOnly/v1.16.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/preload-exists
--- PASS: TestDownloadOnly/v1.16.0/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.16.0/LogsDuration (0.37s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/LogsDuration
aaa_download_only_test.go:169: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-924367
aaa_download_only_test.go:169: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-924367: exit status 85 (366.612522ms)

                                                
                                                
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-924367 | jenkins | v1.30.1 | 31 May 23 18:44 UTC |          |
	|         | -p download-only-924367        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/05/31 18:44:03
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.20.4 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0531 18:44:03.455065    7809 out.go:296] Setting OutFile to fd 1 ...
	I0531 18:44:03.455600    7809 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0531 18:44:03.455639    7809 out.go:309] Setting ErrFile to fd 2...
	I0531 18:44:03.455663    7809 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0531 18:44:03.455940    7809 root.go:336] Updating PATH: /home/jenkins/minikube-integration/16569-2389/.minikube/bin
	W0531 18:44:03.456143    7809 root.go:312] Error reading config file at /home/jenkins/minikube-integration/16569-2389/.minikube/config/config.json: open /home/jenkins/minikube-integration/16569-2389/.minikube/config/config.json: no such file or directory
	I0531 18:44:03.456745    7809 out.go:303] Setting JSON to true
	I0531 18:44:03.457559    7809 start.go:127] hostinfo: {"hostname":"ip-172-31-21-244","uptime":1589,"bootTime":1685557055,"procs":149,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1037-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I0531 18:44:03.457671    7809 start.go:137] virtualization:  
	I0531 18:44:03.462773    7809 out.go:97] [download-only-924367] minikube v1.30.1 on Ubuntu 20.04 (arm64)
	I0531 18:44:03.464998    7809 out.go:169] MINIKUBE_LOCATION=16569
	W0531 18:44:03.463053    7809 preload.go:295] Failed to list preload files: open /home/jenkins/minikube-integration/16569-2389/.minikube/cache/preloaded-tarball: no such file or directory
	I0531 18:44:03.463108    7809 notify.go:220] Checking for updates...
	I0531 18:44:03.466931    7809 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0531 18:44:03.468957    7809 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/16569-2389/kubeconfig
	I0531 18:44:03.470725    7809 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/16569-2389/.minikube
	I0531 18:44:03.472622    7809 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W0531 18:44:03.475809    7809 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0531 18:44:03.476093    7809 driver.go:375] Setting default libvirt URI to qemu:///system
	I0531 18:44:03.500296    7809 docker.go:121] docker version: linux-24.0.2:Docker Engine - Community
	I0531 18:44:03.500381    7809 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0531 18:44:03.795009    7809 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:28 OomKillDisable:true NGoroutines:42 SystemTime:2023-05-31 18:44:03.784924868 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1037-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215166976 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:24.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dce8eb055cbb6872793272b4f20ed16117344f8 Expected:3dce8eb055cbb6872793272b4f20ed16117344f8} RuncCommit:{ID:v1.1.7-0-g860f061 Expected:v1.1.7-0-g860f061} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.5] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.18.1]] Warnings:<nil>}}
	I0531 18:44:03.795122    7809 docker.go:294] overlay module found
	I0531 18:44:03.797410    7809 out.go:97] Using the docker driver based on user configuration
	I0531 18:44:03.797451    7809 start.go:297] selected driver: docker
	I0531 18:44:03.797458    7809 start.go:875] validating driver "docker" against <nil>
	I0531 18:44:03.797571    7809 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0531 18:44:03.863394    7809 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:28 OomKillDisable:true NGoroutines:42 SystemTime:2023-05-31 18:44:03.853875111 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1037-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215166976 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:24.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dce8eb055cbb6872793272b4f20ed16117344f8 Expected:3dce8eb055cbb6872793272b4f20ed16117344f8} RuncCommit:{ID:v1.1.7-0-g860f061 Expected:v1.1.7-0-g860f061} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.5] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.18.1]] Warnings:<nil>}}
	I0531 18:44:03.863554    7809 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0531 18:44:03.863852    7809 start_flags.go:382] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I0531 18:44:03.864013    7809 start_flags.go:901] Wait components to verify : map[apiserver:true system_pods:true]
	I0531 18:44:03.866295    7809 out.go:169] Using Docker driver with root privileges
	I0531 18:44:03.867744    7809 cni.go:84] Creating CNI manager for ""
	I0531 18:44:03.867765    7809 cni.go:142] "docker" driver + "crio" runtime found, recommending kindnet
	I0531 18:44:03.867782    7809 start_flags.go:314] Found "CNI" CNI - setting NetworkPlugin=cni
	I0531 18:44:03.867797    7809 start_flags.go:319] config:
	{Name:download-only-924367 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685034446-16582@sha256:aa728b22374c829d1e5b0a5d64d51d3e0ae0f2b191381d957516fdff68f357c8 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-924367 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRu
ntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0531 18:44:03.870054    7809 out.go:97] Starting control plane node download-only-924367 in cluster download-only-924367
	I0531 18:44:03.870122    7809 cache.go:122] Beginning downloading kic base image for docker with crio
	I0531 18:44:03.872032    7809 out.go:97] Pulling base image ...
	I0531 18:44:03.872063    7809 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I0531 18:44:03.872226    7809 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685034446-16582@sha256:aa728b22374c829d1e5b0a5d64d51d3e0ae0f2b191381d957516fdff68f357c8 in local docker daemon
	I0531 18:44:03.890658    7809 cache.go:150] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685034446-16582@sha256:aa728b22374c829d1e5b0a5d64d51d3e0ae0f2b191381d957516fdff68f357c8 to local cache
	I0531 18:44:03.890836    7809 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685034446-16582@sha256:aa728b22374c829d1e5b0a5d64d51d3e0ae0f2b191381d957516fdff68f357c8 in local cache directory
	I0531 18:44:03.890954    7809 image.go:118] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685034446-16582@sha256:aa728b22374c829d1e5b0a5d64d51d3e0ae0f2b191381d957516fdff68f357c8 to local cache
	I0531 18:44:03.947098    7809 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-arm64.tar.lz4
	I0531 18:44:03.947127    7809 cache.go:57] Caching tarball of preloaded images
	I0531 18:44:03.947294    7809 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I0531 18:44:03.949590    7809 out.go:97] Downloading Kubernetes v1.16.0 preload ...
	I0531 18:44:03.949621    7809 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-arm64.tar.lz4 ...
	I0531 18:44:04.073785    7809 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-arm64.tar.lz4?checksum=md5:743cd3b7071469270e4dbdc0d89badaa -> /home/jenkins/minikube-integration/16569-2389/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-arm64.tar.lz4
	I0531 18:44:09.317910    7809 cache.go:153] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685034446-16582@sha256:aa728b22374c829d1e5b0a5d64d51d3e0ae0f2b191381d957516fdff68f357c8 as a tarball
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-924367"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:170: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.16.0/LogsDuration (0.37s)

                                                
                                    
TestDownloadOnly/v1.27.2/json-events (10.67s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.27.2/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-924367 --force --alsologtostderr --kubernetes-version=v1.27.2 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:69: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-924367 --force --alsologtostderr --kubernetes-version=v1.27.2 --container-runtime=crio --driver=docker  --container-runtime=crio: (10.668100522s)
--- PASS: TestDownloadOnly/v1.27.2/json-events (10.67s)

                                                
                                    
TestDownloadOnly/v1.27.2/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.27.2/preload-exists
--- PASS: TestDownloadOnly/v1.27.2/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.27.2/LogsDuration (0.09s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.27.2/LogsDuration
aaa_download_only_test.go:169: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-924367
aaa_download_only_test.go:169: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-924367: exit status 85 (86.099114ms)

                                                
                                                
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-924367 | jenkins | v1.30.1 | 31 May 23 18:44 UTC |          |
	|         | -p download-only-924367        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	| start   | -o=json --download-only        | download-only-924367 | jenkins | v1.30.1 | 31 May 23 18:44 UTC |          |
	|         | -p download-only-924367        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.27.2   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/05/31 18:44:21
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.20.4 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0531 18:44:21.509345    7888 out.go:296] Setting OutFile to fd 1 ...
	I0531 18:44:21.509520    7888 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0531 18:44:21.509529    7888 out.go:309] Setting ErrFile to fd 2...
	I0531 18:44:21.509534    7888 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0531 18:44:21.509697    7888 root.go:336] Updating PATH: /home/jenkins/minikube-integration/16569-2389/.minikube/bin
	W0531 18:44:21.509830    7888 root.go:312] Error reading config file at /home/jenkins/minikube-integration/16569-2389/.minikube/config/config.json: open /home/jenkins/minikube-integration/16569-2389/.minikube/config/config.json: no such file or directory
	I0531 18:44:21.510120    7888 out.go:303] Setting JSON to true
	I0531 18:44:21.510887    7888 start.go:127] hostinfo: {"hostname":"ip-172-31-21-244","uptime":1607,"bootTime":1685557055,"procs":146,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1037-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I0531 18:44:21.510954    7888 start.go:137] virtualization:  
	I0531 18:44:21.540075    7888 out.go:97] [download-only-924367] minikube v1.30.1 on Ubuntu 20.04 (arm64)
	I0531 18:44:21.573307    7888 out.go:169] MINIKUBE_LOCATION=16569
	I0531 18:44:21.540350    7888 notify.go:220] Checking for updates...
	I0531 18:44:21.636188    7888 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0531 18:44:21.657777    7888 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/16569-2389/kubeconfig
	I0531 18:44:21.686921    7888 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/16569-2389/.minikube
	I0531 18:44:21.718564    7888 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W0531 18:44:21.764519    7888 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0531 18:44:21.765121    7888 config.go:182] Loaded profile config "download-only-924367": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	W0531 18:44:21.765169    7888 start.go:783] api.Load failed for download-only-924367: filestore "download-only-924367": Docker machine "download-only-924367" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0531 18:44:21.765311    7888 driver.go:375] Setting default libvirt URI to qemu:///system
	W0531 18:44:21.765337    7888 start.go:783] api.Load failed for download-only-924367: filestore "download-only-924367": Docker machine "download-only-924367" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0531 18:44:21.789067    7888 docker.go:121] docker version: linux-24.0.2:Docker Engine - Community
	I0531 18:44:21.789158    7888 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0531 18:44:21.877003    7888 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:28 OomKillDisable:true NGoroutines:38 SystemTime:2023-05-31 18:44:21.867007607 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1037-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Archi
tecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215166976 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:24.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dce8eb055cbb6872793272b4f20ed16117344f8 Expected:3dce8eb055cbb6872793272b4f20ed16117344f8} RuncCommit:{ID:v1.1.7-0-g860f061 Expected:v1.1.7-0-g860f061} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> S
erverErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.5] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.18.1]] Warnings:<nil>}}
	I0531 18:44:21.877111    7888 docker.go:294] overlay module found
	I0531 18:44:21.893662    7888 out.go:97] Using the docker driver based on existing profile
	I0531 18:44:21.893710    7888 start.go:297] selected driver: docker
	I0531 18:44:21.893718    7888 start.go:875] validating driver "docker" against &{Name:download-only-924367 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685034446-16582@sha256:aa728b22374c829d1e5b0a5d64d51d3e0ae0f2b191381d957516fdff68f357c8 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-924367 Namespace:default APIServerName:
minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:
SocketVMnetPath: StaticIP:}
	I0531 18:44:21.893909    7888 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0531 18:44:21.975225    7888 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:28 OomKillDisable:true NGoroutines:38 SystemTime:2023-05-31 18:44:21.965214786 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1037-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Archi
tecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215166976 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:24.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dce8eb055cbb6872793272b4f20ed16117344f8 Expected:3dce8eb055cbb6872793272b4f20ed16117344f8} RuncCommit:{ID:v1.1.7-0-g860f061 Expected:v1.1.7-0-g860f061} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> S
erverErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.5] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.18.1]] Warnings:<nil>}}
	I0531 18:44:21.975669    7888 cni.go:84] Creating CNI manager for ""
	I0531 18:44:21.975689    7888 cni.go:142] "docker" driver + "crio" runtime found, recommending kindnet
	I0531 18:44:21.975698    7888 start_flags.go:319] config:
	{Name:download-only-924367 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685034446-16582@sha256:aa728b22374c829d1e5b0a5d64d51d3e0ae0f2b191381d957516fdff68f357c8 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:download-only-924367 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRu
ntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0531 18:44:21.984734    7888 out.go:97] Starting control plane node download-only-924367 in cluster download-only-924367
	I0531 18:44:21.984790    7888 cache.go:122] Beginning downloading kic base image for docker with crio
	I0531 18:44:21.992693    7888 out.go:97] Pulling base image ...
	I0531 18:44:21.992731    7888 preload.go:132] Checking if preload exists for k8s version v1.27.2 and runtime crio
	I0531 18:44:21.992796    7888 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685034446-16582@sha256:aa728b22374c829d1e5b0a5d64d51d3e0ae0f2b191381d957516fdff68f357c8 in local docker daemon
	I0531 18:44:22.012865    7888 cache.go:150] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685034446-16582@sha256:aa728b22374c829d1e5b0a5d64d51d3e0ae0f2b191381d957516fdff68f357c8 to local cache
	I0531 18:44:22.012992    7888 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685034446-16582@sha256:aa728b22374c829d1e5b0a5d64d51d3e0ae0f2b191381d957516fdff68f357c8 in local cache directory
	I0531 18:44:22.013009    7888 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685034446-16582@sha256:aa728b22374c829d1e5b0a5d64d51d3e0ae0f2b191381d957516fdff68f357c8 in local cache directory, skipping pull
	I0531 18:44:22.013013    7888 image.go:105] gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685034446-16582@sha256:aa728b22374c829d1e5b0a5d64d51d3e0ae0f2b191381d957516fdff68f357c8 exists in cache, skipping pull
	I0531 18:44:22.013020    7888 cache.go:153] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685034446-16582@sha256:aa728b22374c829d1e5b0a5d64d51d3e0ae0f2b191381d957516fdff68f357c8 as a tarball
	I0531 18:44:22.065176    7888 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.27.2/preloaded-images-k8s-v18-v1.27.2-cri-o-overlay-arm64.tar.lz4
	I0531 18:44:22.065202    7888 cache.go:57] Caching tarball of preloaded images
	I0531 18:44:22.065386    7888 preload.go:132] Checking if preload exists for k8s version v1.27.2 and runtime crio
	I0531 18:44:22.072778    7888 out.go:97] Downloading Kubernetes v1.27.2 preload ...
	I0531 18:44:22.072812    7888 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.27.2-cri-o-overlay-arm64.tar.lz4 ...
	I0531 18:44:22.228929    7888 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.27.2/preloaded-images-k8s-v18-v1.27.2-cri-o-overlay-arm64.tar.lz4?checksum=md5:47dde9e158811a13dd0ed9ce5ff7e1c2 -> /home/jenkins/minikube-integration/16569-2389/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.2-cri-o-overlay-arm64.tar.lz4
	I0531 18:44:29.961906    7888 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.27.2-cri-o-overlay-arm64.tar.lz4 ...
	I0531 18:44:29.962010    7888 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/16569-2389/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.2-cri-o-overlay-arm64.tar.lz4 ...
	I0531 18:44:30.779817    7888 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.2 on crio
	I0531 18:44:30.779960    7888 profile.go:148] Saving config to /home/jenkins/minikube-integration/16569-2389/.minikube/profiles/download-only-924367/config.json ...
	I0531 18:44:30.780184    7888 preload.go:132] Checking if preload exists for k8s version v1.27.2 and runtime crio
	I0531 18:44:30.780404    7888 download.go:107] Downloading: https://dl.k8s.io/release/v1.27.2/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.27.2/bin/linux/arm64/kubectl.sha256 -> /home/jenkins/minikube-integration/16569-2389/.minikube/cache/linux/arm64/v1.27.2/kubectl
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-924367"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:170: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.27.2/LogsDuration (0.09s)
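
The download.go lines in the log above fetch the preload tarball with an md5 digest embedded in the URL query (?checksum=md5:...) and verify it before caching. A minimal standalone sketch of that verify-while-downloading idea, using only the standard library (an illustration, not minikube's actual download code, which builds on hashicorp/go-getter):

```go
package main

import (
	"crypto/md5"
	"encoding/hex"
	"fmt"
	"io"
	"net/http"
	"net/url"
	"os"
	"strings"
)

// downloadWithMD5 fetches rawURL to dest, hashing the stream as it is
// written and comparing against the md5 hex digest carried in the
// ?checksum=md5:<hex> query parameter, the convention seen in the log.
func downloadWithMD5(rawURL, dest string) error {
	u, err := url.Parse(rawURL)
	if err != nil {
		return err
	}
	want := strings.TrimPrefix(u.Query().Get("checksum"), "md5:")

	resp, err := http.Get(rawURL)
	if err != nil {
		return err
	}
	defer resp.Body.Close()

	out, err := os.Create(dest)
	if err != nil {
		return err
	}
	defer out.Close()

	// Hash the body while writing it to disk, then compare digests.
	h := md5.New()
	if _, err := io.Copy(io.MultiWriter(out, h), resp.Body); err != nil {
		return err
	}
	if got := hex.EncodeToString(h.Sum(nil)); got != want {
		return fmt.Errorf("checksum mismatch: got %s, want %s", got, want)
	}
	return nil
}

func main() {
	// Hypothetical invocation: go run . <url-with-checksum-query> <dest>
	if err := downloadWithMD5(os.Args[1], os.Args[2]); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
```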

                                                
                                    
TestDownloadOnly/DeleteAll (0.34s)

                                                
                                                
=== RUN   TestDownloadOnly/DeleteAll
aaa_download_only_test.go:187: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/DeleteAll (0.34s)

                                                
                                    
TestDownloadOnly/DeleteAlwaysSucceeds (0.17s)

                                                
                                                
=== RUN   TestDownloadOnly/DeleteAlwaysSucceeds
aaa_download_only_test.go:199: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-924367
--- PASS: TestDownloadOnly/DeleteAlwaysSucceeds (0.17s)

                                                
                                    
TestBinaryMirror (0.61s)

                                                
                                                
=== RUN   TestBinaryMirror
aaa_download_only_test.go:304: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p binary-mirror-781489 --alsologtostderr --binary-mirror http://127.0.0.1:45143 --driver=docker  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-781489" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p binary-mirror-781489
--- PASS: TestBinaryMirror (0.61s)
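
TestBinaryMirror starts minikube with --binary-mirror pointed at a loopback HTTP server that stands in for dl.k8s.io. A minimal sketch of such a mirror: the port matches the one in the command above, while the ./mirror directory and its dl.k8s.io-style layout (e.g. release/v1.27.2/bin/linux/arm64/kubectl) are assumptions, not taken from the test output:

```go
package main

import (
	"log"
	"net/http"
)

func main() {
	// Serve a pre-populated ./mirror tree on the loopback address the
	// test passes via --binary-mirror.
	http.Handle("/", http.FileServer(http.Dir("./mirror")))
	log.Fatal(http.ListenAndServe("127.0.0.1:45143", nil))
}
```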

                                                
                                    
TestAddons/Setup (151.09s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:88: (dbg) Run:  out/minikube-linux-arm64 start -p addons-748280 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns
addons_test.go:88: (dbg) Done: out/minikube-linux-arm64 start -p addons-748280 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns: (2m31.086294082s)
--- PASS: TestAddons/Setup (151.09s)

                                                
                                    
TestAddons/parallel/InspektorGadget (10.58s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:814: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-2s2kr" [f449301b-1f4a-4441-8285-5204a2712ed5] Running
addons_test.go:814: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.008992853s
addons_test.go:817: (dbg) Run:  out/minikube-linux-arm64 addons disable inspektor-gadget -p addons-748280
2023/05/31 18:47:23 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2023/05/31 18:47:23 [DEBUG] GET http://192.168.49.2:5000: retrying in 4s (2 left)
addons_test.go:817: (dbg) Done: out/minikube-linux-arm64 addons disable inspektor-gadget -p addons-748280: (5.566868717s)
--- PASS: TestAddons/parallel/InspektorGadget (10.58s)
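
The interleaved [ERR]/[DEBUG] lines above come from a retrying HTTP client probing the registry addon's endpoint in a parallel test; the "retrying in Ns (M left)" format matches hashicorp/go-retryablehttp. A standalone sketch of the same pattern using only the standard library (an illustration, not the test's actual client):

```go
package main

import (
	"fmt"
	"net/http"
	"time"
)

// getWithRetry issues a GET, doubling the backoff after each failure and
// logging in the same shape as the test output, until it succeeds or the
// attempt budget is exhausted.
func getWithRetry(url string, attempts int) (*http.Response, error) {
	backoff := time.Second
	for left := attempts - 1; ; left-- {
		resp, err := http.Get(url)
		if err == nil {
			return resp, nil
		}
		if left == 0 {
			return nil, fmt.Errorf("GET %s giving up after %d attempt(s): %w", url, attempts, err)
		}
		fmt.Printf("[ERR] GET %s request failed: %v\n", url, err)
		fmt.Printf("[DEBUG] GET %s: retrying in %v (%d left)\n", url, backoff, left)
		time.Sleep(backoff)
		backoff *= 2
	}
}

func main() {
	resp, err := getWithRetry("http://192.168.49.2:5000", 5)
	if err != nil {
		fmt.Println(err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("status:", resp.Status)
}
```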

                                                
                                    
TestAddons/parallel/MetricsServer (5.59s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:383: metrics-server stabilized in 4.207588ms
addons_test.go:385: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-844d8db974-vjh5j" [af201e0f-457a-4fb5-91e6-f01fdfaa6868] Running
addons_test.go:385: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.010517064s
addons_test.go:391: (dbg) Run:  kubectl --context addons-748280 top pods -n kube-system
addons_test.go:408: (dbg) Run:  out/minikube-linux-arm64 -p addons-748280 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.59s)

                                                
                                    
TestAddons/parallel/CSI (52.75s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
addons_test.go:537: csi-hostpath-driver pods stabilized in 20.522003ms
addons_test.go:540: (dbg) Run:  kubectl --context addons-748280 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:545: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-748280 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-748280 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-748280 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-748280 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-748280 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-748280 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-748280 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-748280 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:550: (dbg) Run:  kubectl --context addons-748280 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:555: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [56674f4a-cc7d-413f-8bd9-683f21f19e7d] Pending
helpers_test.go:344: "task-pv-pod" [56674f4a-cc7d-413f-8bd9-683f21f19e7d] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [56674f4a-cc7d-413f-8bd9-683f21f19e7d] Running
addons_test.go:555: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 12.015075171s
addons_test.go:560: (dbg) Run:  kubectl --context addons-748280 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:565: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-748280 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-748280 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:570: (dbg) Run:  kubectl --context addons-748280 delete pod task-pv-pod
addons_test.go:576: (dbg) Run:  kubectl --context addons-748280 delete pvc hpvc
addons_test.go:582: (dbg) Run:  kubectl --context addons-748280 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:587: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-748280 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-748280 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-748280 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-748280 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-748280 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-748280 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-748280 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-748280 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-748280 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-748280 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-748280 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-748280 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-748280 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-748280 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:592: (dbg) Run:  kubectl --context addons-748280 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:597: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [96272374-9fdb-4c1b-b3a2-3ba78c506c89] Pending
helpers_test.go:344: "task-pv-pod-restore" [96272374-9fdb-4c1b-b3a2-3ba78c506c89] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [96272374-9fdb-4c1b-b3a2-3ba78c506c89] Running
addons_test.go:597: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.01585088s
addons_test.go:602: (dbg) Run:  kubectl --context addons-748280 delete pod task-pv-pod-restore
addons_test.go:602: (dbg) Done: kubectl --context addons-748280 delete pod task-pv-pod-restore: (1.158869817s)
addons_test.go:606: (dbg) Run:  kubectl --context addons-748280 delete pvc hpvc-restore
addons_test.go:610: (dbg) Run:  kubectl --context addons-748280 delete volumesnapshot new-snapshot-demo
addons_test.go:614: (dbg) Run:  out/minikube-linux-arm64 -p addons-748280 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:614: (dbg) Done: out/minikube-linux-arm64 -p addons-748280 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.605057333s)
addons_test.go:618: (dbg) Run:  out/minikube-linux-arm64 -p addons-748280 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (52.75s)
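
The repeated helpers_test.go:394 lines above poll the claim's phase by shelling out to kubectl with a JSONPath query until it reports Bound. A minimal sketch of that wait loop; the context, namespace, claim name, and timeout come from the test output, while the 2-second poll interval is an assumption:

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// waitForPVCBound polls `kubectl get pvc -o jsonpath={.status.phase}` until
// the claim reports Bound or the timeout expires.
func waitForPVCBound(kubectlContext, ns, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		out, err := exec.Command("kubectl", "--context", kubectlContext,
			"get", "pvc", name, "-n", ns,
			"-o", "jsonpath={.status.phase}").Output()
		if err == nil && strings.TrimSpace(string(out)) == "Bound" {
			return nil
		}
		time.Sleep(2 * time.Second) // poll interval: an assumption, not from the log
	}
	return fmt.Errorf("pvc %s/%s not Bound within %v", ns, name, timeout)
}

func main() {
	if err := waitForPVCBound("addons-748280", "default", "hpvc", 6*time.Minute); err != nil {
		fmt.Println(err)
	}
}
```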

                                                
                                    
TestAddons/parallel/Headlamp (13.6s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:800: (dbg) Run:  out/minikube-linux-arm64 addons enable headlamp -p addons-748280 --alsologtostderr -v=1
addons_test.go:800: (dbg) Done: out/minikube-linux-arm64 addons enable headlamp -p addons-748280 --alsologtostderr -v=1: (1.587203891s)
addons_test.go:805: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-6b5756787-lfcfq" [771a8ad0-5b37-47af-b551-8afdf8a2e4bc] Pending
helpers_test.go:344: "headlamp-6b5756787-lfcfq" [771a8ad0-5b37-47af-b551-8afdf8a2e4bc] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-6b5756787-lfcfq" [771a8ad0-5b37-47af-b551-8afdf8a2e4bc] Running / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-6b5756787-lfcfq" [771a8ad0-5b37-47af-b551-8afdf8a2e4bc] Running
addons_test.go:805: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 12.012941319s
--- PASS: TestAddons/parallel/Headlamp (13.60s)

                                                
                                    
TestAddons/parallel/CloudSpanner (5.53s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:833: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-6964794569-dkz26" [9c6b0885-ecde-4422-b892-da46dc53fbb2] Running
addons_test.go:833: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.012482028s
addons_test.go:836: (dbg) Run:  out/minikube-linux-arm64 addons disable cloud-spanner -p addons-748280
--- PASS: TestAddons/parallel/CloudSpanner (5.53s)

                                                
                                    
TestAddons/serial/GCPAuth/Namespaces (0.21s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:626: (dbg) Run:  kubectl --context addons-748280 create ns new-namespace
addons_test.go:640: (dbg) Run:  kubectl --context addons-748280 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.21s)

                                                
                                    
TestAddons/StoppedEnableDisable (12.27s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:148: (dbg) Run:  out/minikube-linux-arm64 stop -p addons-748280
addons_test.go:148: (dbg) Done: out/minikube-linux-arm64 stop -p addons-748280: (12.055445999s)
addons_test.go:152: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-748280
addons_test.go:156: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-748280
addons_test.go:161: (dbg) Run:  out/minikube-linux-arm64 addons disable gvisor -p addons-748280
--- PASS: TestAddons/StoppedEnableDisable (12.27s)

                                                
                                    
TestCertOptions (36.34s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-arm64 start -p cert-options-066838 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-arm64 start -p cert-options-066838 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio: (33.662664244s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-arm64 -p cert-options-066838 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-066838 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-arm64 ssh -p cert-options-066838 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-066838" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-options-066838
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-options-066838: (1.996170721s)
--- PASS: TestCertOptions (36.34s)
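
TestCertOptions verifies the apiserver certificate by running `openssl x509 -text -noout` over SSH and checking that the extra --apiserver-ips/--apiserver-names values landed in the SANs. The same inspection can be done in Go with crypto/x509; a minimal sketch, assuming the certificate has been copied off the node (the path below is the one the test reads):

```go
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	data, err := os.ReadFile("/var/lib/minikube/certs/apiserver.crt")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		fmt.Fprintln(os.Stderr, "no PEM certificate found")
		os.Exit(1)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	// The --apiserver-names / --apiserver-ips flags should show up here.
	fmt.Println("DNS SANs:", cert.DNSNames)
	fmt.Println("IP SANs:", cert.IPAddresses)
	fmt.Println("Not after:", cert.NotAfter)
}
```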

                                                
                                    
TestCertExpiration (261.52s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-193721 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=crio
E0531 19:28:31.868043    7804 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16569-2389/.minikube/profiles/ingress-addon-legacy-546551/client.crt: no such file or directory
cert_options_test.go:123: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-193721 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=crio: (40.557718261s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-193721 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=crio
E0531 19:32:05.253237    7804 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16569-2389/.minikube/profiles/addons-748280/client.crt: no such file or directory
cert_options_test.go:131: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-193721 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=crio: (38.430825637s)
helpers_test.go:175: Cleaning up "cert-expiration-193721" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-expiration-193721
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-expiration-193721: (2.5263446s)
--- PASS: TestCertExpiration (261.52s)

                                                
                                    
TestForceSystemdFlag (39.94s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:85: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-flag-124615 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:85: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-flag-124615 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (37.024399866s)
docker_test.go:126: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-flag-124615 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-124615" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-flag-124615
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-flag-124615: (2.562025718s)
--- PASS: TestForceSystemdFlag (39.94s)

                                                
                                    
TestForceSystemdEnv (43.23s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:149: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-env-322303 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:149: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-env-322303 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (40.599110746s)
helpers_test.go:175: Cleaning up "force-systemd-env-322303" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-env-322303
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-env-322303: (2.630944924s)
--- PASS: TestForceSystemdEnv (43.23s)

                                                
                                    
TestErrorSpam/setup (31.71s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -p nospam-616621 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-616621 --driver=docker  --container-runtime=crio
E0531 18:52:05.253860    7804 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16569-2389/.minikube/profiles/addons-748280/client.crt: no such file or directory
E0531 18:52:05.261985    7804 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16569-2389/.minikube/profiles/addons-748280/client.crt: no such file or directory
E0531 18:52:05.272221    7804 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16569-2389/.minikube/profiles/addons-748280/client.crt: no such file or directory
E0531 18:52:05.293361    7804 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16569-2389/.minikube/profiles/addons-748280/client.crt: no such file or directory
E0531 18:52:05.333623    7804 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16569-2389/.minikube/profiles/addons-748280/client.crt: no such file or directory
E0531 18:52:05.413894    7804 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16569-2389/.minikube/profiles/addons-748280/client.crt: no such file or directory
E0531 18:52:05.574282    7804 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16569-2389/.minikube/profiles/addons-748280/client.crt: no such file or directory
error_spam_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -p nospam-616621 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-616621 --driver=docker  --container-runtime=crio: (31.706014873s)
--- PASS: TestErrorSpam/setup (31.71s)

                                                
                                    
TestErrorSpam/start (0.82s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-616621 --log_dir /tmp/nospam-616621 start --dry-run
E0531 18:52:05.895337    7804 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16569-2389/.minikube/profiles/addons-748280/client.crt: no such file or directory
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-616621 --log_dir /tmp/nospam-616621 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-616621 --log_dir /tmp/nospam-616621 start --dry-run
E0531 18:52:06.536270    7804 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16569-2389/.minikube/profiles/addons-748280/client.crt: no such file or directory
--- PASS: TestErrorSpam/start (0.82s)

                                                
                                    
TestErrorSpam/status (1.06s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-616621 --log_dir /tmp/nospam-616621 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-616621 --log_dir /tmp/nospam-616621 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-616621 --log_dir /tmp/nospam-616621 status
--- PASS: TestErrorSpam/status (1.06s)

                                                
                                    
TestErrorSpam/pause (1.8s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-616621 --log_dir /tmp/nospam-616621 pause
E0531 18:52:07.817241    7804 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16569-2389/.minikube/profiles/addons-748280/client.crt: no such file or directory
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-616621 --log_dir /tmp/nospam-616621 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-616621 --log_dir /tmp/nospam-616621 pause
--- PASS: TestErrorSpam/pause (1.80s)

                                                
                                    
TestErrorSpam/unpause (2.01s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-616621 --log_dir /tmp/nospam-616621 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-616621 --log_dir /tmp/nospam-616621 unpause
E0531 18:52:10.377703    7804 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16569-2389/.minikube/profiles/addons-748280/client.crt: no such file or directory
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-616621 --log_dir /tmp/nospam-616621 unpause
--- PASS: TestErrorSpam/unpause (2.01s)

                                                
                                    
TestErrorSpam/stop (1.46s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-616621 --log_dir /tmp/nospam-616621 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-arm64 -p nospam-616621 --log_dir /tmp/nospam-616621 stop: (1.258795291s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-616621 --log_dir /tmp/nospam-616621 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-616621 --log_dir /tmp/nospam-616621 stop
--- PASS: TestErrorSpam/stop (1.46s)

                                                
                                    
TestFunctional/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1850: local sync path: /home/jenkins/minikube-integration/16569-2389/.minikube/files/etc/test/nested/copy/7804/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
TestFunctional/serial/StartWithProxy (77.32s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2229: (dbg) Run:  out/minikube-linux-arm64 start -p functional-747104 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio
E0531 18:52:25.739106    7804 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16569-2389/.minikube/profiles/addons-748280/client.crt: no such file or directory
E0531 18:52:46.220098    7804 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16569-2389/.minikube/profiles/addons-748280/client.crt: no such file or directory
E0531 18:53:27.180865    7804 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16569-2389/.minikube/profiles/addons-748280/client.crt: no such file or directory
functional_test.go:2229: (dbg) Done: out/minikube-linux-arm64 start -p functional-747104 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio: (1m17.322887636s)
--- PASS: TestFunctional/serial/StartWithProxy (77.32s)

TestFunctional/serial/AuditLog (0.00s)
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (54.94s)
=== RUN   TestFunctional/serial/SoftStart
functional_test.go:654: (dbg) Run:  out/minikube-linux-arm64 start -p functional-747104 --alsologtostderr -v=8
functional_test.go:654: (dbg) Done: out/minikube-linux-arm64 start -p functional-747104 --alsologtostderr -v=8: (54.940407684s)
functional_test.go:658: soft start took 54.943902651s for "functional-747104" cluster.
--- PASS: TestFunctional/serial/SoftStart (54.94s)

TestFunctional/serial/KubeContext (0.06s)
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:676: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.06s)

TestFunctional/serial/KubectlGetPods (0.10s)
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:691: (dbg) Run:  kubectl --context functional-747104 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.10s)

TestFunctional/serial/CacheCmd/cache/add_remote (4.00s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1044: (dbg) Run:  out/minikube-linux-arm64 -p functional-747104 cache add registry.k8s.io/pause:3.1
functional_test.go:1044: (dbg) Done: out/minikube-linux-arm64 -p functional-747104 cache add registry.k8s.io/pause:3.1: (1.303111959s)
functional_test.go:1044: (dbg) Run:  out/minikube-linux-arm64 -p functional-747104 cache add registry.k8s.io/pause:3.3
functional_test.go:1044: (dbg) Done: out/minikube-linux-arm64 -p functional-747104 cache add registry.k8s.io/pause:3.3: (1.379678535s)
functional_test.go:1044: (dbg) Run:  out/minikube-linux-arm64 -p functional-747104 cache add registry.k8s.io/pause:latest
functional_test.go:1044: (dbg) Done: out/minikube-linux-arm64 -p functional-747104 cache add registry.k8s.io/pause:latest: (1.317969727s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (4.00s)

TestFunctional/serial/CacheCmd/cache/add_local (1.02s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1072: (dbg) Run:  docker build -t minikube-local-cache-test:functional-747104 /tmp/TestFunctionalserialCacheCmdcacheadd_local4152498906/001
functional_test.go:1084: (dbg) Run:  out/minikube-linux-arm64 -p functional-747104 cache add minikube-local-cache-test:functional-747104
functional_test.go:1089: (dbg) Run:  out/minikube-linux-arm64 -p functional-747104 cache delete minikube-local-cache-test:functional-747104
functional_test.go:1078: (dbg) Run:  docker rmi minikube-local-cache-test:functional-747104
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.02s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1097: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

TestFunctional/serial/CacheCmd/cache/list (0.05s)
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1105: (dbg) Run:  out/minikube-linux-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.05s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.32s)
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1119: (dbg) Run:  out/minikube-linux-arm64 -p functional-747104 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.32s)

TestFunctional/serial/CacheCmd/cache/cache_reload (2.15s)
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1142: (dbg) Run:  out/minikube-linux-arm64 -p functional-747104 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1148: (dbg) Run:  out/minikube-linux-arm64 -p functional-747104 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1148: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-747104 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (309.165645ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1153: (dbg) Run:  out/minikube-linux-arm64 -p functional-747104 cache reload
functional_test.go:1153: (dbg) Done: out/minikube-linux-arm64 -p functional-747104 cache reload: (1.196836242s)
functional_test.go:1158: (dbg) Run:  out/minikube-linux-arm64 -p functional-747104 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (2.15s)

TestFunctional/serial/CacheCmd/cache/delete (0.11s)
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1167: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1167: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.11s)

TestFunctional/serial/MinikubeKubectlCmd (0.14s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:711: (dbg) Run:  out/minikube-linux-arm64 -p functional-747104 kubectl -- --context functional-747104 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.14s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.14s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:736: (dbg) Run:  out/kubectl --context functional-747104 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.14s)

TestFunctional/serial/ExtraConfig (32.44s)
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:752: (dbg) Run:  out/minikube-linux-arm64 start -p functional-747104 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E0531 18:54:49.102098    7804 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16569-2389/.minikube/profiles/addons-748280/client.crt: no such file or directory
functional_test.go:752: (dbg) Done: out/minikube-linux-arm64 start -p functional-747104 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (32.437881702s)
functional_test.go:756: restart took 32.437978119s for "functional-747104" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (32.44s)

TestFunctional/serial/ComponentHealth (0.12s)
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:805: (dbg) Run:  kubectl --context functional-747104 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:820: etcd phase: Running
functional_test.go:830: etcd status: Ready
functional_test.go:820: kube-apiserver phase: Running
functional_test.go:830: kube-apiserver status: Ready
functional_test.go:820: kube-controller-manager phase: Running
functional_test.go:830: kube-controller-manager status: Ready
functional_test.go:820: kube-scheduler phase: Running
functional_test.go:830: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.12s)

TestFunctional/serial/LogsCmd (1.82s)
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1231: (dbg) Run:  out/minikube-linux-arm64 -p functional-747104 logs
functional_test.go:1231: (dbg) Done: out/minikube-linux-arm64 -p functional-747104 logs: (1.81990177s)
--- PASS: TestFunctional/serial/LogsCmd (1.82s)

TestFunctional/serial/LogsFileCmd (1.84s)
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1245: (dbg) Run:  out/minikube-linux-arm64 -p functional-747104 logs --file /tmp/TestFunctionalserialLogsFileCmd2210223568/001/logs.txt
functional_test.go:1245: (dbg) Done: out/minikube-linux-arm64 -p functional-747104 logs --file /tmp/TestFunctionalserialLogsFileCmd2210223568/001/logs.txt: (1.843374978s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.84s)

TestFunctional/parallel/ConfigCmd (0.43s)
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1194: (dbg) Run:  out/minikube-linux-arm64 -p functional-747104 config unset cpus
functional_test.go:1194: (dbg) Run:  out/minikube-linux-arm64 -p functional-747104 config get cpus
functional_test.go:1194: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-747104 config get cpus: exit status 14 (74.750125ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1194: (dbg) Run:  out/minikube-linux-arm64 -p functional-747104 config set cpus 2
functional_test.go:1194: (dbg) Run:  out/minikube-linux-arm64 -p functional-747104 config get cpus
functional_test.go:1194: (dbg) Run:  out/minikube-linux-arm64 -p functional-747104 config unset cpus
functional_test.go:1194: (dbg) Run:  out/minikube-linux-arm64 -p functional-747104 config get cpus
functional_test.go:1194: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-747104 config get cpus: exit status 14 (64.457019ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.43s)

TestFunctional/parallel/DashboardCmd (11.05s)
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:900: (dbg) daemon: [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-747104 --alsologtostderr -v=1]
functional_test.go:905: (dbg) stopping [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-747104 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 34492: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (11.05s)

TestFunctional/parallel/DryRun (0.72s)
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:969: (dbg) Run:  out/minikube-linux-arm64 start -p functional-747104 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:969: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-747104 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (311.000898ms)

-- stdout --
	* [functional-747104] minikube v1.30.1 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=16569
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/16569-2389/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/16569-2389/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile

-- /stdout --
** stderr ** 
	I0531 18:56:08.007156   33968 out.go:296] Setting OutFile to fd 1 ...
	I0531 18:56:08.007360   33968 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0531 18:56:08.007373   33968 out.go:309] Setting ErrFile to fd 2...
	I0531 18:56:08.007379   33968 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0531 18:56:08.007618   33968 root.go:336] Updating PATH: /home/jenkins/minikube-integration/16569-2389/.minikube/bin
	I0531 18:56:08.008590   33968 out.go:303] Setting JSON to false
	I0531 18:56:08.009678   33968 start.go:127] hostinfo: {"hostname":"ip-172-31-21-244","uptime":2313,"bootTime":1685557055,"procs":317,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1037-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I0531 18:56:08.009775   33968 start.go:137] virtualization:  
	I0531 18:56:08.018198   33968 out.go:177] * [functional-747104] minikube v1.30.1 on Ubuntu 20.04 (arm64)
	I0531 18:56:08.020260   33968 out.go:177]   - MINIKUBE_LOCATION=16569
	I0531 18:56:08.022023   33968 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0531 18:56:08.020398   33968 notify.go:220] Checking for updates...
	I0531 18:56:08.026484   33968 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/16569-2389/kubeconfig
	I0531 18:56:08.029883   33968 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/16569-2389/.minikube
	I0531 18:56:08.031887   33968 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0531 18:56:08.042659   33968 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0531 18:56:08.049650   33968 config.go:182] Loaded profile config "functional-747104": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.27.2
	I0531 18:56:08.050242   33968 driver.go:375] Setting default libvirt URI to qemu:///system
	I0531 18:56:08.076047   33968 docker.go:121] docker version: linux-24.0.2:Docker Engine - Community
	I0531 18:56:08.076151   33968 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0531 18:56:08.231085   33968 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:31 OomKillDisable:true NGoroutines:46 SystemTime:2023-05-31 18:56:08.216793936 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1037-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215166976 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:24.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dce8eb055cbb6872793272b4f20ed16117344f8 Expected:3dce8eb055cbb6872793272b4f20ed16117344f8} RuncCommit:{ID:v1.1.7-0-g860f061 Expected:v1.1.7-0-g860f061} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.5] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.18.1]] Warnings:<nil>}}
	I0531 18:56:08.231205   33968 docker.go:294] overlay module found
	I0531 18:56:08.233402   33968 out.go:177] * Using the docker driver based on existing profile
	I0531 18:56:08.235632   33968 start.go:297] selected driver: docker
	I0531 18:56:08.235652   33968 start.go:875] validating driver "docker" against &{Name:functional-747104 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685034446-16582@sha256:aa728b22374c829d1e5b0a5d64d51d3e0ae0f2b191381d957516fdff68f357c8 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:functional-747104 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.27.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0531 18:56:08.235743   33968 start.go:886] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0531 18:56:08.239193   33968 out.go:177] 
	W0531 18:56:08.241336   33968 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0531 18:56:08.243798   33968 out.go:177] 

** /stderr **
functional_test.go:986: (dbg) Run:  out/minikube-linux-arm64 start -p functional-747104 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.72s)

TestFunctional/parallel/InternationalLanguage (0.28s)
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1015: (dbg) Run:  out/minikube-linux-arm64 start -p functional-747104 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:1015: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-747104 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (280.881683ms)

-- stdout --
	* [functional-747104] minikube v1.30.1 sur Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=16569
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/16569-2389/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/16569-2389/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant

-- /stdout --
** stderr ** 
	I0531 18:56:08.123283   33978 out.go:296] Setting OutFile to fd 1 ...
	I0531 18:56:08.123978   33978 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0531 18:56:08.123995   33978 out.go:309] Setting ErrFile to fd 2...
	I0531 18:56:08.124003   33978 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0531 18:56:08.124439   33978 root.go:336] Updating PATH: /home/jenkins/minikube-integration/16569-2389/.minikube/bin
	I0531 18:56:08.125014   33978 out.go:303] Setting JSON to false
	I0531 18:56:08.126767   33978 start.go:127] hostinfo: {"hostname":"ip-172-31-21-244","uptime":2313,"bootTime":1685557055,"procs":311,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1037-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I0531 18:56:08.126847   33978 start.go:137] virtualization:  
	I0531 18:56:08.129774   33978 out.go:177] * [functional-747104] minikube v1.30.1 sur Ubuntu 20.04 (arm64)
	I0531 18:56:08.139444   33978 out.go:177]   - MINIKUBE_LOCATION=16569
	I0531 18:56:08.141621   33978 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0531 18:56:08.139791   33978 notify.go:220] Checking for updates...
	I0531 18:56:08.146014   33978 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/16569-2389/kubeconfig
	I0531 18:56:08.155087   33978 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/16569-2389/.minikube
	I0531 18:56:08.162831   33978 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0531 18:56:08.165174   33978 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0531 18:56:08.171157   33978 config.go:182] Loaded profile config "functional-747104": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.27.2
	I0531 18:56:08.171763   33978 driver.go:375] Setting default libvirt URI to qemu:///system
	I0531 18:56:08.214927   33978 docker.go:121] docker version: linux-24.0.2:Docker Engine - Community
	I0531 18:56:08.215018   33978 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0531 18:56:08.309043   33978 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:31 OomKillDisable:true NGoroutines:46 SystemTime:2023-05-31 18:56:08.298479778 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1037-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215166976 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:24.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dce8eb055cbb6872793272b4f20ed16117344f8 Expected:3dce8eb055cbb6872793272b4f20ed16117344f8} RuncCommit:{ID:v1.1.7-0-g860f061 Expected:v1.1.7-0-g860f061} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.5] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.18.1]] Warnings:<nil>}}
	I0531 18:56:08.309151   33978 docker.go:294] overlay module found
	I0531 18:56:08.311544   33978 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I0531 18:56:08.313849   33978 start.go:297] selected driver: docker
	I0531 18:56:08.313872   33978 start.go:875] validating driver "docker" against &{Name:functional-747104 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685034446-16582@sha256:aa728b22374c829d1e5b0a5d64d51d3e0ae0f2b191381d957516fdff68f357c8 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:functional-747104 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.27.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0531 18:56:08.313984   33978 start.go:886] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0531 18:56:08.316805   33978 out.go:177] 
	W0531 18:56:08.318933   33978 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0531 18:56:08.321136   33978 out.go:177] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.28s)

TestFunctional/parallel/StatusCmd (1.06s)
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:849: (dbg) Run:  out/minikube-linux-arm64 -p functional-747104 status
functional_test.go:855: (dbg) Run:  out/minikube-linux-arm64 -p functional-747104 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:867: (dbg) Run:  out/minikube-linux-arm64 -p functional-747104 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.06s)

TestFunctional/parallel/ServiceCmdConnect (8.69s)
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1625: (dbg) Run:  kubectl --context functional-747104 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1633: (dbg) Run:  kubectl --context functional-747104 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1638: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-58d66798bb-pb468" [9c25bad8-53c7-44a0-bcff-f4572770216f] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-connect-58d66798bb-pb468" [9c25bad8-53c7-44a0-bcff-f4572770216f] Running
functional_test.go:1638: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 8.010924089s
functional_test.go:1647: (dbg) Run:  out/minikube-linux-arm64 -p functional-747104 service hello-node-connect --url
functional_test.go:1653: found endpoint for hello-node-connect: http://192.168.49.2:31440
functional_test.go:1673: http://192.168.49.2:31440: success! body:

Hostname: hello-node-connect-58d66798bb-pb468

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.49.2:8080/

Request Headers:
	accept-encoding=gzip
	host=192.168.49.2:31440
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-
--- PASS: TestFunctional/parallel/ServiceCmdConnect (8.69s)

TestFunctional/parallel/AddonsCmd (0.19s)
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1688: (dbg) Run:  out/minikube-linux-arm64 -p functional-747104 addons list
functional_test.go:1700: (dbg) Run:  out/minikube-linux-arm64 -p functional-747104 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.19s)

TestFunctional/parallel/PersistentVolumeClaim (28.47s)
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [67d22013-675b-4b19-b59e-d036f32f8c47] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.011716032s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-747104 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-747104 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-747104 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-747104 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [548ec177-3a5c-467e-9f82-2e7e5f78c7bd] Pending
helpers_test.go:344: "sp-pod" [548ec177-3a5c-467e-9f82-2e7e5f78c7bd] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [548ec177-3a5c-467e-9f82-2e7e5f78c7bd] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 11.017855481s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-747104 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-747104 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-747104 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [65092491-14c8-4489-a596-1ba14175fbb3] Pending
helpers_test.go:344: "sp-pod" [65092491-14c8-4489-a596-1ba14175fbb3] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [65092491-14c8-4489-a596-1ba14175fbb3] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 10.023445272s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-747104 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (28.47s)

TestFunctional/parallel/SSHCmd (0.67s)
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1723: (dbg) Run:  out/minikube-linux-arm64 -p functional-747104 ssh "echo hello"
functional_test.go:1740: (dbg) Run:  out/minikube-linux-arm64 -p functional-747104 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.67s)

TestFunctional/parallel/CpCmd (1.49s)
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-747104 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-747104 ssh -n functional-747104 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-747104 cp functional-747104:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd4123969448/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-747104 ssh -n functional-747104 "sudo cat /home/docker/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.49s)

TestFunctional/parallel/FileSync (0.41s)
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1924: Checking for existence of /etc/test/nested/copy/7804/hosts within VM
functional_test.go:1926: (dbg) Run:  out/minikube-linux-arm64 -p functional-747104 ssh "sudo cat /etc/test/nested/copy/7804/hosts"
functional_test.go:1931: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.41s)

TestFunctional/parallel/CertSync (2.13s)
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1967: Checking for existence of /etc/ssl/certs/7804.pem within VM
functional_test.go:1968: (dbg) Run:  out/minikube-linux-arm64 -p functional-747104 ssh "sudo cat /etc/ssl/certs/7804.pem"
functional_test.go:1967: Checking for existence of /usr/share/ca-certificates/7804.pem within VM
functional_test.go:1968: (dbg) Run:  out/minikube-linux-arm64 -p functional-747104 ssh "sudo cat /usr/share/ca-certificates/7804.pem"
functional_test.go:1967: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1968: (dbg) Run:  out/minikube-linux-arm64 -p functional-747104 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1994: Checking for existence of /etc/ssl/certs/78042.pem within VM
functional_test.go:1995: (dbg) Run:  out/minikube-linux-arm64 -p functional-747104 ssh "sudo cat /etc/ssl/certs/78042.pem"
functional_test.go:1994: Checking for existence of /usr/share/ca-certificates/78042.pem within VM
functional_test.go:1995: (dbg) Run:  out/minikube-linux-arm64 -p functional-747104 ssh "sudo cat /usr/share/ca-certificates/78042.pem"
functional_test.go:1994: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1995: (dbg) Run:  out/minikube-linux-arm64 -p functional-747104 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (2.13s)

TestFunctional/parallel/NodeLabels (0.11s)
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:217: (dbg) Run:  kubectl --context functional-747104 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.11s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.75s)
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2022: (dbg) Run:  out/minikube-linux-arm64 -p functional-747104 ssh "sudo systemctl is-active docker"
functional_test.go:2022: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-747104 ssh "sudo systemctl is-active docker": exit status 1 (382.53898ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
functional_test.go:2022: (dbg) Run:  out/minikube-linux-arm64 -p functional-747104 ssh "sudo systemctl is-active containerd"
functional_test.go:2022: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-747104 ssh "sudo systemctl is-active containerd": exit status 1 (370.536362ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.75s)

TestFunctional/parallel/License (0.33s)
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License
=== CONT  TestFunctional/parallel/License
functional_test.go:2283: (dbg) Run:  out/minikube-linux-arm64 license
--- PASS: TestFunctional/parallel/License (0.33s)

TestFunctional/parallel/Version/short (0.08s)
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2251: (dbg) Run:  out/minikube-linux-arm64 -p functional-747104 version --short
--- PASS: TestFunctional/parallel/Version/short (0.08s)

TestFunctional/parallel/Version/components (0.79s)
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2265: (dbg) Run:  out/minikube-linux-arm64 -p functional-747104 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.79s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.32s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:259: (dbg) Run:  out/minikube-linux-arm64 -p functional-747104 image ls --format short --alsologtostderr
functional_test.go:264: (dbg) Stdout: out/minikube-linux-arm64 -p functional-747104 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.27.2
registry.k8s.io/kube-proxy:v1.27.2
registry.k8s.io/kube-controller-manager:v1.27.2
registry.k8s.io/kube-apiserver:v1.27.2
registry.k8s.io/etcd:3.5.7-0
registry.k8s.io/echoserver-arm:1.8
registry.k8s.io/coredns/coredns:v1.10.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
gcr.io/google-containers/addon-resizer:functional-747104
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/kindest/kindnetd:v20230511-dc714da8
functional_test.go:267: (dbg) Stderr: out/minikube-linux-arm64 -p functional-747104 image ls --format short --alsologtostderr:
I0531 18:56:10.362408   34457 out.go:296] Setting OutFile to fd 1 ...
I0531 18:56:10.362631   34457 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0531 18:56:10.362651   34457 out.go:309] Setting ErrFile to fd 2...
I0531 18:56:10.362670   34457 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0531 18:56:10.362863   34457 root.go:336] Updating PATH: /home/jenkins/minikube-integration/16569-2389/.minikube/bin
I0531 18:56:10.363489   34457 config.go:182] Loaded profile config "functional-747104": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.27.2
I0531 18:56:10.363661   34457 config.go:182] Loaded profile config "functional-747104": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.27.2
I0531 18:56:10.364199   34457 cli_runner.go:164] Run: docker container inspect functional-747104 --format={{.State.Status}}
I0531 18:56:10.384086   34457 ssh_runner.go:195] Run: systemctl --version
I0531 18:56:10.384144   34457 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-747104
I0531 18:56:10.424395   34457 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32782 SSHKeyPath:/home/jenkins/minikube-integration/16569-2389/.minikube/machines/functional-747104/id_rsa Username:docker}
I0531 18:56:10.545080   34457 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.32s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.28s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:259: (dbg) Run:  out/minikube-linux-arm64 -p functional-747104 image ls --format table --alsologtostderr
functional_test.go:264: (dbg) Stdout: out/minikube-linux-arm64 -p functional-747104 image ls --format table --alsologtostderr:
|-----------------------------------------|--------------------|---------------|--------|
|                  Image                  |        Tag         |   Image ID    |  Size  |
|-----------------------------------------|--------------------|---------------|--------|
| registry.k8s.io/pause                   | latest             | 8cb2091f603e7 | 246kB  |
| docker.io/kindest/kindnetd              | v20230511-dc714da8 | b18bf71b941ba | 60.9MB |
| docker.io/library/nginx                 | latest             | c42efe0b54387 | 140MB  |
| gcr.io/google-containers/addon-resizer  | functional-747104  | ffd4cfbbe753e | 34.1MB |
| gcr.io/k8s-minikube/busybox             | latest             | 71a676dd070f4 | 1.63MB |
| registry.k8s.io/echoserver-arm          | 1.8                | 72565bf5bbedf | 87.5MB |
| registry.k8s.io/etcd                    | 3.5.7-0            | 24bc64e911039 | 182MB  |
| gcr.io/k8s-minikube/busybox             | 1.28.4-glibc       | 1611cd07b61d5 | 3.77MB |
| registry.k8s.io/coredns/coredns         | v1.10.1            | 97e04611ad434 | 51.4MB |
| registry.k8s.io/kube-apiserver          | v1.27.2            | 72c9df6be7f1b | 116MB  |
| registry.k8s.io/pause                   | 3.1                | 8057e0500773a | 529kB  |
| localhost/my-image                      | functional-747104  | 10ceccbd1e6fe | 1.64MB |
| registry.k8s.io/kube-scheduler          | v1.27.2            | 305d7ed1dae28 | 57.6MB |
| registry.k8s.io/pause                   | 3.3                | 3d18732f8686c | 487kB  |
| registry.k8s.io/pause                   | 3.9                | 829e9de338bd5 | 520kB  |
| docker.io/library/nginx                 | alpine             | 5ee47dcca7543 | 42.8MB |
| gcr.io/k8s-minikube/storage-provisioner | v5                 | ba04bb24b9575 | 29MB   |
| registry.k8s.io/kube-controller-manager | v1.27.2            | 2ee705380c3c5 | 109MB  |
| registry.k8s.io/kube-proxy              | v1.27.2            | 29921a0845422 | 68.1MB |
|-----------------------------------------|--------------------|---------------|--------|
functional_test.go:267: (dbg) Stderr: out/minikube-linux-arm64 -p functional-747104 image ls --format table --alsologtostderr:
I0531 18:56:16.522274   34874 out.go:296] Setting OutFile to fd 1 ...
I0531 18:56:16.522781   34874 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0531 18:56:16.522792   34874 out.go:309] Setting ErrFile to fd 2...
I0531 18:56:16.522799   34874 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0531 18:56:16.523139   34874 root.go:336] Updating PATH: /home/jenkins/minikube-integration/16569-2389/.minikube/bin
I0531 18:56:16.524217   34874 config.go:182] Loaded profile config "functional-747104": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.27.2
I0531 18:56:16.524356   34874 config.go:182] Loaded profile config "functional-747104": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.27.2
I0531 18:56:16.525068   34874 cli_runner.go:164] Run: docker container inspect functional-747104 --format={{.State.Status}}
I0531 18:56:16.553290   34874 ssh_runner.go:195] Run: systemctl --version
I0531 18:56:16.553340   34874 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-747104
I0531 18:56:16.579545   34874 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32782 SSHKeyPath:/home/jenkins/minikube-integration/16569-2389/.minikube/machines/functional-747104/id_rsa Username:docker}
I0531 18:56:16.680529   34874 ssh_runner.go:195] Run: sudo crictl images --output json
2023/05/31 18:56:19 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.28s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.33s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:259: (dbg) Run:  out/minikube-linux-arm64 -p functional-747104 image ls --format json --alsologtostderr
functional_test.go:264: (dbg) Stdout: out/minikube-linux-arm64 -p functional-747104 image ls --format json --alsologtostderr:
[{"id":"ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2","gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"29037500"},{"id":"10ceccbd1e6fea5c0a1b27d7288c4c1ee4d62cb5bf3d7fcaaa508e72d5cf5a03","repoDigests":["localhost/my-image@sha256:43b800b8aec0ea83f2b807b8f7454d0387cd321e3c04a4dbe30a11d3212a1e16"],"repoTags":["localhost/my-image:functional-747104"],"size":"1640226"},{"id":"72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb","repoDigests":["registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5"],"repoTags":["registry.k8s.io/echoserver-arm:1.8"],"size":"87536549"},{"id":"2ee705380c3c59a538b853cbe9ae9886ebbd0001a4cea4add5adeea48e5f48d4","repoDigests":["registry.k8s.io/k
ube-controller-manager@sha256:6626c27b7df41d86340a701121792c5c0dc40ca8877c23478fc5659103bc7505","registry.k8s.io/kube-controller-manager@sha256:b0990ef7c9ce9edd0f57355a7e4cb43a71e864bfd2cd55bc68e4998e00213b56"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.27.2"],"size":"108667702"},{"id":"829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e","repoDigests":["registry.k8s.io/pause@sha256:3ec98b8452dc8ae265a6917dfb81587ac78849e520d5dbba6de524851d20eca6","registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097"],"repoTags":["registry.k8s.io/pause:3.9"],"size":"520014"},{"id":"8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":["registry.k8s.io/pause@sha256:f5e31d44aa14d5669e030380b656463a7e45934c03994e72e3dbf83d4a645cca"],"repoTags":["registry.k8s.io/pause:latest"],"size":"246070"},{"id":"c42efe0b54387756e68d167a437aef21451f63eebd9330bb555367d67128386c","repoDigests":["docker.io/library/nginx@sha256:0bb91b50c42bc6677acff40ea
0f050b655c5c2cc1311e783097a04061191340b","docker.io/library/nginx@sha256:af296b188c7b7df99ba960ca614439c99cb7cf252ed7bbc23e90cfda59092305"],"repoTags":["docker.io/library/nginx:latest"],"size":"139751562"},{"id":"1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"3774172"},{"id":"5ee47dcca7543750b3941b52e98f103bbbae9aaf574ab4dc018e1e7d34e505ad","repoDigests":["docker.io/library/nginx@sha256:203cba3f56d7dba1d66b95c091db65a4f0778eb5d16e76151e73e0413e317328","docker.io/library/nginx@sha256:2e776a66a3556f001aba13431b26e448fe8acba277bf93d2ab1a785571a46d90"],"repoTags":["docker.io/library/nginx:alpine"],"size":"42810437"},{"id":"97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108","repoDigests":["registry.k8s
.io/coredns/coredns@sha256:74130b944396a0b0ca9af923ee6e03b08a35d98fc1bbaef4e35cf9acc5599105","registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e"],"repoTags":["registry.k8s.io/coredns/coredns:v1.10.1"],"size":"51393451"},{"id":"3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":["registry.k8s.io/pause@sha256:e59730b14890252c14f85976e22ab1c47ec28b111ffed407f34bca1b44447476"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"487479"},{"id":"b18bf71b941bae2e12db1c07e567ad14e4febbc778310a0fc64487f1ac877d79","repoDigests":["docker.io/kindest/kindnetd@sha256:2c39858b71cf6c5737ff0daa8130a6574d4c6bd2a7dacaf002060c02f2bc1b4f","docker.io/kindest/kindnetd@sha256:6c00e28db008c2afa67d9ee085c86184ec9ae5281d5ae1bd15006746fb9a1974"],"repoTags":["docker.io/kindest/kindnetd:v20230511-dc714da8"],"size":"60881430"},{"id":"20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d2
9e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf"],"repoTags":[],"size":"247562353"},{"id":"305d7ed1dae2877c3a80d434c5fb9f1aac1aa3d2431c36130a3fcd1970e93840","repoDigests":["registry.k8s.io/kube-scheduler@sha256:89e8c591cc58bd0e8651dddee3de290399b1ae5ad14779afe84779083fe05177","registry.k8s.io/kube-scheduler@sha256:e0ecd0ce2447789a58ad5e94acda2cff8ad4e6ca3ccc06041b89e7eb0b78a6c4"],"repoTags":["registry.k8s.io/kube-scheduler:v1.27.2"],"size":"57615158"},{"id":"ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91","repoDigests":["gcr.io/google-containers/addon-resizer@sha256:0ce7cf4876524f069adf654e4dd3c95fe4bfc889c8bbc03cd6ecd061d9392126"],"repoTags":["gcr.io/google-containers/addon-resizer:functional-747104"],"size":"34114467"},{"id":"72c9df6be7f1b997e4a31b5cb9aa7262e5278905af97e6a69e341e3f0f9bbaae","repoDigests":["registry.k8s.io/kube-apiserver@sha256:599c991fe774036dff5f54b3113290d
83da173d7627ea259bd2a3064eaa7987e","registry.k8s.io/kube-apiserver@sha256:94e48585629fde3c1d06c6ae8f62885d3052f12a1072ffd97611296525eff5b9"],"repoTags":["registry.k8s.io/kube-apiserver:v1.27.2"],"size":"116138960"},{"id":"24bc64e911039ecf00e263be2161797c758b7d82403ca5516ab64047a477f737","repoDigests":["registry.k8s.io/etcd@sha256:1c19137e8a1716ce9f66c8c767bf114d7cad975db7a9784146486aa764f6dddd","registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83"],"repoTags":["registry.k8s.io/etcd:3.5.7-0"],"size":"182283991"},{"id":"29921a084542255eb81a1a660a603b1a24636d88b202f9010daae75fa32754c0","repoDigests":["registry.k8s.io/kube-proxy@sha256:1e4f13f5f5c215813fb9c9c6f56da1c0354363f2a69bd12732658f79d585864f","registry.k8s.io/kube-proxy@sha256:7ebc3b4df29c385197555a543c4a3379cfcdabdfbe37e2b2ea3ceac87ce28bca"],"repoTags":["registry.k8s.io/kube-proxy:v1.27.2"],"size":"68099991"},{"id":"8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":["registry.k8s.io/p
ause@sha256:b0602c9f938379133ff8017007894b48c1112681c9468f82a1e4cbf8a4498b67"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"528622"},{"id":"fe448b4338db324b73ecf40884b181f437f02706ad391beb847292f960d549bd","repoDigests":["docker.io/library/958e8fd1722bae1cd0ed5f1e671b10a16f5f6331b730cd997ffc24d6ed0c6637-tmp@sha256:d09332dfbf765ef96861e50aeed9cd8f5f8599f5bcbf5184a8a68de94c31c620"],"repoTags":[],"size":"1637644"},{"id":"71a676dd070f4b701c3272e566d84951362f1326ea07d5bbad119d1c4f6b3d02","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:a77fe109c026308f149d36484d795b42efe0fd29b332be9071f63e1634c36ac9","gcr.io/k8s-minikube/busybox@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b"],"repoTags":["gcr.io/k8s-minikube/busybox:latest"],"size":"1634527"}]
functional_test.go:267: (dbg) Stderr: out/minikube-linux-arm64 -p functional-747104 image ls --format json --alsologtostderr:
I0531 18:56:16.175924   34845 out.go:296] Setting OutFile to fd 1 ...
I0531 18:56:16.176123   34845 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0531 18:56:16.176142   34845 out.go:309] Setting ErrFile to fd 2...
I0531 18:56:16.176161   34845 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0531 18:56:16.176323   34845 root.go:336] Updating PATH: /home/jenkins/minikube-integration/16569-2389/.minikube/bin
I0531 18:56:16.176928   34845 config.go:182] Loaded profile config "functional-747104": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.27.2
I0531 18:56:16.177068   34845 config.go:182] Loaded profile config "functional-747104": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.27.2
I0531 18:56:16.177585   34845 cli_runner.go:164] Run: docker container inspect functional-747104 --format={{.State.Status}}
I0531 18:56:16.207831   34845 ssh_runner.go:195] Run: systemctl --version
I0531 18:56:16.207883   34845 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-747104
I0531 18:56:16.230693   34845 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32782 SSHKeyPath:/home/jenkins/minikube-integration/16569-2389/.minikube/machines/functional-747104/id_rsa Username:docker}
I0531 18:56:16.356748   34845 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.33s)
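
Note: the payload above is a flat JSON array of {id, repoDigests, repoTags, size} objects. A rough sketch for summarizing it on the host (jq is an assumption, not part of the test; the field names are taken verbatim from the payload):

	out/minikube-linux-arm64 -p functional-747104 image ls --format json \
	  | jq -r '.[] | "\(.repoTags[0] // .id[0:13])  \(.size) bytes"'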

TestFunctional/parallel/ImageCommands/ImageListYaml (0.33s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:259: (dbg) Run:  out/minikube-linux-arm64 -p functional-747104 image ls --format yaml --alsologtostderr
functional_test.go:264: (dbg) Stdout: out/minikube-linux-arm64 -p functional-747104 image ls --format yaml --alsologtostderr:
- id: c42efe0b54387756e68d167a437aef21451f63eebd9330bb555367d67128386c
repoDigests:
- docker.io/library/nginx@sha256:0bb91b50c42bc6677acff40ea0f050b655c5c2cc1311e783097a04061191340b
- docker.io/library/nginx@sha256:af296b188c7b7df99ba960ca614439c99cb7cf252ed7bbc23e90cfda59092305
repoTags:
- docker.io/library/nginx:latest
size: "139751562"
- id: ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91
repoDigests:
- gcr.io/google-containers/addon-resizer@sha256:0ce7cf4876524f069adf654e4dd3c95fe4bfc889c8bbc03cd6ecd061d9392126
repoTags:
- gcr.io/google-containers/addon-resizer:functional-747104
size: "34114467"
- id: ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "29037500"
- id: 97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:74130b944396a0b0ca9af923ee6e03b08a35d98fc1bbaef4e35cf9acc5599105
- registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e
repoTags:
- registry.k8s.io/coredns/coredns:v1.10.1
size: "51393451"
- id: 72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb
repoDigests:
- registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5
repoTags:
- registry.k8s.io/echoserver-arm:1.8
size: "87536549"
- id: 305d7ed1dae2877c3a80d434c5fb9f1aac1aa3d2431c36130a3fcd1970e93840
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:89e8c591cc58bd0e8651dddee3de290399b1ae5ad14779afe84779083fe05177
- registry.k8s.io/kube-scheduler@sha256:e0ecd0ce2447789a58ad5e94acda2cff8ad4e6ca3ccc06041b89e7eb0b78a6c4
repoTags:
- registry.k8s.io/kube-scheduler:v1.27.2
size: "57615158"
- id: 8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests:
- registry.k8s.io/pause@sha256:f5e31d44aa14d5669e030380b656463a7e45934c03994e72e3dbf83d4a645cca
repoTags:
- registry.k8s.io/pause:latest
size: "246070"
- id: 5ee47dcca7543750b3941b52e98f103bbbae9aaf574ab4dc018e1e7d34e505ad
repoDigests:
- docker.io/library/nginx@sha256:203cba3f56d7dba1d66b95c091db65a4f0778eb5d16e76151e73e0413e317328
- docker.io/library/nginx@sha256:2e776a66a3556f001aba13431b26e448fe8acba277bf93d2ab1a785571a46d90
repoTags:
- docker.io/library/nginx:alpine
size: "42810437"
- id: 24bc64e911039ecf00e263be2161797c758b7d82403ca5516ab64047a477f737
repoDigests:
- registry.k8s.io/etcd@sha256:1c19137e8a1716ce9f66c8c767bf114d7cad975db7a9784146486aa764f6dddd
- registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83
repoTags:
- registry.k8s.io/etcd:3.5.7-0
size: "182283991"
- id: 2ee705380c3c59a538b853cbe9ae9886ebbd0001a4cea4add5adeea48e5f48d4
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:6626c27b7df41d86340a701121792c5c0dc40ca8877c23478fc5659103bc7505
- registry.k8s.io/kube-controller-manager@sha256:b0990ef7c9ce9edd0f57355a7e4cb43a71e864bfd2cd55bc68e4998e00213b56
repoTags:
- registry.k8s.io/kube-controller-manager:v1.27.2
size: "108667702"
- id: 1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "3774172"
- id: 72c9df6be7f1b997e4a31b5cb9aa7262e5278905af97e6a69e341e3f0f9bbaae
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:599c991fe774036dff5f54b3113290d83da173d7627ea259bd2a3064eaa7987e
- registry.k8s.io/kube-apiserver@sha256:94e48585629fde3c1d06c6ae8f62885d3052f12a1072ffd97611296525eff5b9
repoTags:
- registry.k8s.io/kube-apiserver:v1.27.2
size: "116138960"
- id: 8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests:
- registry.k8s.io/pause@sha256:b0602c9f938379133ff8017007894b48c1112681c9468f82a1e4cbf8a4498b67
repoTags:
- registry.k8s.io/pause:3.1
size: "528622"
- id: 3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests:
- registry.k8s.io/pause@sha256:e59730b14890252c14f85976e22ab1c47ec28b111ffed407f34bca1b44447476
repoTags:
- registry.k8s.io/pause:3.3
size: "487479"
- id: 829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e
repoDigests:
- registry.k8s.io/pause@sha256:3ec98b8452dc8ae265a6917dfb81587ac78849e520d5dbba6de524851d20eca6
- registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097
repoTags:
- registry.k8s.io/pause:3.9
size: "520014"
- id: b18bf71b941bae2e12db1c07e567ad14e4febbc778310a0fc64487f1ac877d79
repoDigests:
- docker.io/kindest/kindnetd@sha256:2c39858b71cf6c5737ff0daa8130a6574d4c6bd2a7dacaf002060c02f2bc1b4f
- docker.io/kindest/kindnetd@sha256:6c00e28db008c2afa67d9ee085c86184ec9ae5281d5ae1bd15006746fb9a1974
repoTags:
- docker.io/kindest/kindnetd:v20230511-dc714da8
size: "60881430"
- id: 29921a084542255eb81a1a660a603b1a24636d88b202f9010daae75fa32754c0
repoDigests:
- registry.k8s.io/kube-proxy@sha256:1e4f13f5f5c215813fb9c9c6f56da1c0354363f2a69bd12732658f79d585864f
- registry.k8s.io/kube-proxy@sha256:7ebc3b4df29c385197555a543c4a3379cfcdabdfbe37e2b2ea3ceac87ce28bca
repoTags:
- registry.k8s.io/kube-proxy:v1.27.2
size: "68099991"
functional_test.go:267: (dbg) Stderr: out/minikube-linux-arm64 -p functional-747104 image ls --format yaml --alsologtostderr:
I0531 18:56:10.701375   34488 out.go:296] Setting OutFile to fd 1 ...
I0531 18:56:10.701554   34488 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0531 18:56:10.701562   34488 out.go:309] Setting ErrFile to fd 2...
I0531 18:56:10.701568   34488 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0531 18:56:10.701757   34488 root.go:336] Updating PATH: /home/jenkins/minikube-integration/16569-2389/.minikube/bin
I0531 18:56:10.702404   34488 config.go:182] Loaded profile config "functional-747104": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.27.2
I0531 18:56:10.702555   34488 config.go:182] Loaded profile config "functional-747104": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.27.2
I0531 18:56:10.703190   34488 cli_runner.go:164] Run: docker container inspect functional-747104 --format={{.State.Status}}
I0531 18:56:10.730439   34488 ssh_runner.go:195] Run: systemctl --version
I0531 18:56:10.730491   34488 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-747104
I0531 18:56:10.763535   34488 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32782 SSHKeyPath:/home/jenkins/minikube-integration/16569-2389/.minikube/machines/functional-747104/id_rsa Username:docker}
I0531 18:56:10.896252   34488 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.33s)

TestFunctional/parallel/ImageCommands/ImageBuild (5.17s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:306: (dbg) Run:  out/minikube-linux-arm64 -p functional-747104 ssh pgrep buildkitd
functional_test.go:306: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-747104 ssh pgrep buildkitd: exit status 1 (304.326248ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:313: (dbg) Run:  out/minikube-linux-arm64 -p functional-747104 image build -t localhost/my-image:functional-747104 testdata/build --alsologtostderr
functional_test.go:313: (dbg) Done: out/minikube-linux-arm64 -p functional-747104 image build -t localhost/my-image:functional-747104 testdata/build --alsologtostderr: (4.58462158s)
functional_test.go:318: (dbg) Stdout: out/minikube-linux-arm64 -p functional-747104 image build -t localhost/my-image:functional-747104 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> fe448b4338d
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-747104
--> 10ceccbd1e6
Successfully tagged localhost/my-image:functional-747104
10ceccbd1e6fea5c0a1b27d7288c4c1ee4d62cb5bf3d7fcaaa508e72d5cf5a03
functional_test.go:321: (dbg) Stderr: out/minikube-linux-arm64 -p functional-747104 image build -t localhost/my-image:functional-747104 testdata/build --alsologtostderr:
I0531 18:56:11.316163   34609 out.go:296] Setting OutFile to fd 1 ...
I0531 18:56:11.316404   34609 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0531 18:56:11.316414   34609 out.go:309] Setting ErrFile to fd 2...
I0531 18:56:11.316420   34609 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0531 18:56:11.316593   34609 root.go:336] Updating PATH: /home/jenkins/minikube-integration/16569-2389/.minikube/bin
I0531 18:56:11.317206   34609 config.go:182] Loaded profile config "functional-747104": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.27.2
I0531 18:56:11.317772   34609 config.go:182] Loaded profile config "functional-747104": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.27.2
I0531 18:56:11.318263   34609 cli_runner.go:164] Run: docker container inspect functional-747104 --format={{.State.Status}}
I0531 18:56:11.339113   34609 ssh_runner.go:195] Run: systemctl --version
I0531 18:56:11.339173   34609 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-747104
I0531 18:56:11.361563   34609 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32782 SSHKeyPath:/home/jenkins/minikube-integration/16569-2389/.minikube/machines/functional-747104/id_rsa Username:docker}
I0531 18:56:11.460873   34609 build_images.go:151] Building image from path: /tmp/build.328779182.tar
I0531 18:56:11.460942   34609 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0531 18:56:11.476004   34609 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.328779182.tar
I0531 18:56:11.480686   34609 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.328779182.tar: stat -c "%s %y" /var/lib/minikube/build/build.328779182.tar: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/var/lib/minikube/build/build.328779182.tar': No such file or directory
I0531 18:56:11.480717   34609 ssh_runner.go:362] scp /tmp/build.328779182.tar --> /var/lib/minikube/build/build.328779182.tar (3072 bytes)
I0531 18:56:11.511713   34609 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.328779182
I0531 18:56:11.524161   34609 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.328779182 -xf /var/lib/minikube/build/build.328779182.tar
I0531 18:56:11.535946   34609 crio.go:297] Building image: /var/lib/minikube/build/build.328779182
I0531 18:56:11.536039   34609 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-747104 /var/lib/minikube/build/build.328779182 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34
Copying blob sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34
Copying config sha256:71a676dd070f4b701c3272e566d84951362f1326ea07d5bbad119d1c4f6b3d02
Writing manifest to image destination
Storing signatures
I0531 18:56:15.810546   34609 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-747104 /var/lib/minikube/build/build.328779182 --cgroup-manager=cgroupfs: (4.274483004s)
I0531 18:56:15.810608   34609 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.328779182
I0531 18:56:15.821830   34609 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.328779182.tar
I0531 18:56:15.832983   34609 build_images.go:207] Built localhost/my-image:functional-747104 from /tmp/build.328779182.tar
I0531 18:56:15.833011   34609 build_images.go:123] succeeded building to: functional-747104
I0531 18:56:15.833016   34609 build_images.go:124] failed building to: 
functional_test.go:446: (dbg) Run:  out/minikube-linux-arm64 -p functional-747104 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (5.17s)
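
Note: the STEP 1/3..3/3 lines above pin down the three-line Dockerfile this test builds. A self-contained re-run might look like the following sketch (the content.txt payload is a hypothetical stand-in; the build command is verbatim from the run):

	mkdir -p testdata/build
	printf 'FROM gcr.io/k8s-minikube/busybox\nRUN true\nADD content.txt /\n' > testdata/build/Dockerfile
	echo hypothetical-payload > testdata/build/content.txt
	out/minikube-linux-arm64 -p functional-747104 image build -t localhost/my-image:functional-747104 testdata/build --alsologtostderr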

TestFunctional/parallel/ImageCommands/Setup (1.81s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:340: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:340: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (1.78555321s)
functional_test.go:345: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-747104
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.81s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.26s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2114: (dbg) Run:  out/minikube-linux-arm64 -p functional-747104 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.26s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.28s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2114: (dbg) Run:  out/minikube-linux-arm64 -p functional-747104 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.28s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.23s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2114: (dbg) Run:  out/minikube-linux-arm64 -p functional-747104 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.23s)
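
Note: all three UpdateContextCmd variants run the same command, which rewrites the profile's kubeconfig entry in place. A quick check of the result (the kubectl call is an assumption, not part of the test):

	out/minikube-linux-arm64 -p functional-747104 update-context --alsologtostderr -v=2
	kubectl config get-contexts functional-747104   # assumed follow-up to inspect the rewritten entry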

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (6.16s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:353: (dbg) Run:  out/minikube-linux-arm64 -p functional-747104 image load --daemon gcr.io/google-containers/addon-resizer:functional-747104 --alsologtostderr
functional_test.go:353: (dbg) Done: out/minikube-linux-arm64 -p functional-747104 image load --daemon gcr.io/google-containers/addon-resizer:functional-747104 --alsologtostderr: (5.892461566s)
functional_test.go:446: (dbg) Run:  out/minikube-linux-arm64 -p functional-747104 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (6.16s)

TestFunctional/parallel/ServiceCmd/DeployApp (11.43s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1435: (dbg) Run:  kubectl --context functional-747104 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1443: (dbg) Run:  kubectl --context functional-747104 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1448: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-7b684b55f9-l5w22" [56ab7feb-049f-4cd9-9008-ae5101559a6b] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-7b684b55f9-l5w22" [56ab7feb-049f-4cd9-9008-ae5101559a6b] Running
functional_test.go:1448: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 11.056052529s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (11.43s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (2.99s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:363: (dbg) Run:  out/minikube-linux-arm64 -p functional-747104 image load --daemon gcr.io/google-containers/addon-resizer:functional-747104 --alsologtostderr
functional_test.go:363: (dbg) Done: out/minikube-linux-arm64 -p functional-747104 image load --daemon gcr.io/google-containers/addon-resizer:functional-747104 --alsologtostderr: (2.753845256s)
functional_test.go:446: (dbg) Run:  out/minikube-linux-arm64 -p functional-747104 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (2.99s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (6.15s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:233: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:233: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.9: (1.755108179s)
functional_test.go:238: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-747104
functional_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-747104 image load --daemon gcr.io/google-containers/addon-resizer:functional-747104 --alsologtostderr
functional_test.go:243: (dbg) Done: out/minikube-linux-arm64 -p functional-747104 image load --daemon gcr.io/google-containers/addon-resizer:functional-747104 --alsologtostderr: (4.06269745s)
functional_test.go:446: (dbg) Run:  out/minikube-linux-arm64 -p functional-747104 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (6.15s)
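
Note: Setup, ImageLoadDaemon and ImageTagAndLoadDaemon above jointly exercise the host-to-node image path. Condensed, with every command verbatim from the runs above:

	docker pull gcr.io/google-containers/addon-resizer:1.8.8
	docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-747104
	out/minikube-linux-arm64 -p functional-747104 image load --daemon gcr.io/google-containers/addon-resizer:functional-747104 --alsologtostderr
	out/minikube-linux-arm64 -p functional-747104 image ls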

TestFunctional/parallel/ServiceCmd/List (0.46s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1457: (dbg) Run:  out/minikube-linux-arm64 -p functional-747104 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.46s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.47s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1487: (dbg) Run:  out/minikube-linux-arm64 -p functional-747104 service list -o json
functional_test.go:1492: Took "470.946006ms" to run "out/minikube-linux-arm64 -p functional-747104 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.47s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.44s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1507: (dbg) Run:  out/minikube-linux-arm64 -p functional-747104 service --namespace=default --https --url hello-node
functional_test.go:1520: found endpoint: https://192.168.49.2:31134
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.44s)

TestFunctional/parallel/ServiceCmd/Format (0.57s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1538: (dbg) Run:  out/minikube-linux-arm64 -p functional-747104 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.57s)

TestFunctional/parallel/ServiceCmd/URL (0.61s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1557: (dbg) Run:  out/minikube-linux-arm64 -p functional-747104 service hello-node --url
functional_test.go:1563: found endpoint for hello-node: http://192.168.49.2:31134
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.61s)
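
Note: the ServiceCmd tests above share one flow: create and expose a deployment, then let minikube resolve its NodePort URL. Condensed (commands verbatim from DeployApp and URL; the endpoint is specific to this run):

	kubectl --context functional-747104 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
	kubectl --context functional-747104 expose deployment hello-node --type=NodePort --port=8080
	out/minikube-linux-arm64 -p functional-747104 service hello-node --url
	# -> http://192.168.49.2:31134 in this run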

TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.05s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:378: (dbg) Run:  out/minikube-linux-arm64 -p functional-747104 image save gcr.io/google-containers/addon-resizer:functional-747104 /home/jenkins/workspace/Docker_Linux_crio_arm64/addon-resizer-save.tar --alsologtostderr
functional_test.go:378: (dbg) Done: out/minikube-linux-arm64 -p functional-747104 image save gcr.io/google-containers/addon-resizer:functional-747104 /home/jenkins/workspace/Docker_Linux_crio_arm64/addon-resizer-save.tar --alsologtostderr: (1.049960396s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.05s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.85s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-747104 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-747104 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-747104 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 31464: os: process already finished
helpers_test.go:508: unable to kill pid 31311: os: process already finished
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-747104 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.85s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-arm64 -p functional-747104 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.79s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:390: (dbg) Run:  out/minikube-linux-arm64 -p functional-747104 image rm gcr.io/google-containers/addon-resizer:functional-747104 --alsologtostderr
functional_test.go:446: (dbg) Run:  out/minikube-linux-arm64 -p functional-747104 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.79s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (10.61s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-747104 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [979c0af7-3567-49e6-945d-7e407dbb6872] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [979c0af7-3567-49e6-945d-7e407dbb6872] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 10.007658354s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (10.61s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.88s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:407: (dbg) Run:  out/minikube-linux-arm64 -p functional-747104 image load /home/jenkins/workspace/Docker_Linux_crio_arm64/addon-resizer-save.tar --alsologtostderr
functional_test.go:407: (dbg) Done: out/minikube-linux-arm64 -p functional-747104 image load /home/jenkins/workspace/Docker_Linux_crio_arm64/addon-resizer-save.tar --alsologtostderr: (1.17450052s)
functional_test.go:446: (dbg) Run:  out/minikube-linux-arm64 -p functional-747104 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.88s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (3.14s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:417: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-747104
functional_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p functional-747104 image save --daemon gcr.io/google-containers/addon-resizer:functional-747104 --alsologtostderr
functional_test.go:422: (dbg) Done: out/minikube-linux-arm64 -p functional-747104 image save --daemon gcr.io/google-containers/addon-resizer:functional-747104 --alsologtostderr: (3.087520548s)
functional_test.go:427: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-747104
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (3.14s)
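
Note: ImageSaveToFile, ImageRemove and ImageLoadFromFile above form a save/remove/restore round trip. Condensed (tarball path verbatim from this run):

	out/minikube-linux-arm64 -p functional-747104 image save gcr.io/google-containers/addon-resizer:functional-747104 /home/jenkins/workspace/Docker_Linux_crio_arm64/addon-resizer-save.tar --alsologtostderr
	out/minikube-linux-arm64 -p functional-747104 image rm gcr.io/google-containers/addon-resizer:functional-747104 --alsologtostderr
	out/minikube-linux-arm64 -p functional-747104 image load /home/jenkins/workspace/Docker_Linux_crio_arm64/addon-resizer-save.tar --alsologtostderr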

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.08s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-747104 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.08s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.108.157.54 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-arm64 -p functional-747104 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)
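
Note: the TunnelCmd sequence above reduces to: keep a tunnel daemon running, deploy a LoadBalancer service, then read the ingress IP it is assigned. A rough sketch (the curl probe is an assumption standing in for the test's internal HTTP check; 10.108.157.54 is specific to this run):

	out/minikube-linux-arm64 -p functional-747104 tunnel --alsologtostderr &
	kubectl --context functional-747104 apply -f testdata/testsvc.yaml
	kubectl --context functional-747104 get svc nginx-svc -o jsonpath='{.status.loadBalancer.ingress[0].ip}'
	curl -sI http://10.108.157.54   # assumed manual probe of the tunneled endpoint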

TestFunctional/parallel/ProfileCmd/profile_not_create (0.4s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1268: (dbg) Run:  out/minikube-linux-arm64 profile lis
functional_test.go:1273: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.40s)

TestFunctional/parallel/ProfileCmd/profile_list (0.4s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1308: (dbg) Run:  out/minikube-linux-arm64 profile list
functional_test.go:1313: Took "337.278703ms" to run "out/minikube-linux-arm64 profile list"
functional_test.go:1322: (dbg) Run:  out/minikube-linux-arm64 profile list -l
functional_test.go:1327: Took "67.541421ms" to run "out/minikube-linux-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.40s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.38s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1359: (dbg) Run:  out/minikube-linux-arm64 profile list -o json
functional_test.go:1364: Took "320.525635ms" to run "out/minikube-linux-arm64 profile list -o json"
functional_test.go:1372: (dbg) Run:  out/minikube-linux-arm64 profile list -o json --light
functional_test.go:1377: Took "55.248383ms" to run "out/minikube-linux-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.38s)
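
Note: a sketch for machine-reading the profile list (both jq and the valid[].Name shape are assumptions; the JSON body is not shown in this log):

	out/minikube-linux-arm64 profile list -o json --light | jq -r '.valid[].Name'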

TestFunctional/parallel/MountCmd/any-port (8.37s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-747104 /tmp/TestFunctionalparallelMountCmdany-port2611835550/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1685559355577420177" to /tmp/TestFunctionalparallelMountCmdany-port2611835550/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1685559355577420177" to /tmp/TestFunctionalparallelMountCmdany-port2611835550/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1685559355577420177" to /tmp/TestFunctionalparallelMountCmdany-port2611835550/001/test-1685559355577420177
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-747104 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-747104 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (343.948673ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-747104 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-arm64 -p functional-747104 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 May 31 18:55 created-by-test
-rw-r--r-- 1 docker docker 24 May 31 18:55 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 May 31 18:55 test-1685559355577420177
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-arm64 -p functional-747104 ssh cat /mount-9p/test-1685559355577420177
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-747104 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [dc0b7be9-40bf-413a-9c09-457127b2020b] Pending
helpers_test.go:344: "busybox-mount" [dc0b7be9-40bf-413a-9c09-457127b2020b] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [dc0b7be9-40bf-413a-9c09-457127b2020b] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [dc0b7be9-40bf-413a-9c09-457127b2020b] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 5.008436803s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-747104 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-747104 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-747104 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-arm64 -p functional-747104 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-747104 /tmp/TestFunctionalparallelMountCmdany-port2611835550/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (8.37s)
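
Note: the any-port flow above, condensed into a by-hand sequence (the host directory is a hypothetical stand-in for the test's temp dir; all other commands are verbatim from the run):

	out/minikube-linux-arm64 mount -p functional-747104 /tmp/hostdir:/mount-9p --alsologtostderr -v=1 &
	out/minikube-linux-arm64 -p functional-747104 ssh "findmnt -T /mount-9p | grep 9p"
	out/minikube-linux-arm64 -p functional-747104 ssh -- ls -la /mount-9p
	out/minikube-linux-arm64 -p functional-747104 ssh "sudo umount -f /mount-9p"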

TestFunctional/parallel/MountCmd/specific-port (2.19s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-747104 /tmp/TestFunctionalparallelMountCmdspecific-port2832748319/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-747104 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-747104 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (368.742075ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-747104 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-arm64 -p functional-747104 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-747104 /tmp/TestFunctionalparallelMountCmdspecific-port2832748319/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-arm64 -p functional-747104 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-747104 ssh "sudo umount -f /mount-9p": exit status 1 (300.575715ms)
-- stdout --
	umount: /mount-9p: not mounted.
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-arm64 -p functional-747104 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-747104 /tmp/TestFunctionalparallelMountCmdspecific-port2832748319/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.19s)

TestFunctional/parallel/MountCmd/VerifyCleanup (1.91s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-747104 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1095620344/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-747104 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1095620344/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-747104 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1095620344/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-747104 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-747104 ssh "findmnt -T" /mount1: exit status 1 (591.366133ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-747104 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-747104 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-747104 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-arm64 mount -p functional-747104 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-747104 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1095620344/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-747104 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1095620344/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-747104 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1095620344/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.91s)
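
Note: the two mount variants above add the remaining controls: --port pins the host-side 9p server port, and mount --kill=true reaps leftover mount processes for the profile (both flags verbatim from the runs above; /tmp/hostdir is a hypothetical stand-in):

	out/minikube-linux-arm64 mount -p functional-747104 /tmp/hostdir:/mount-9p --port 46464 &
	out/minikube-linux-arm64 mount -p functional-747104 --kill=true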

TestFunctional/delete_addon-resizer_images (0.08s)

=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:188: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:188: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-747104
--- PASS: TestFunctional/delete_addon-resizer_images (0.08s)

TestFunctional/delete_my-image_image (0.02s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:196: (dbg) Run:  docker rmi -f localhost/my-image:functional-747104
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:204: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-747104
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestIngressAddonLegacy/StartLegacyK8sCluster (117.75s)

=== RUN   TestIngressAddonLegacy/StartLegacyK8sCluster
ingress_addon_legacy_test.go:39: (dbg) Run:  out/minikube-linux-arm64 start -p ingress-addon-legacy-546551 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
E0531 18:57:05.254162    7804 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16569-2389/.minikube/profiles/addons-748280/client.crt: no such file or directory
E0531 18:57:32.942864    7804 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16569-2389/.minikube/profiles/addons-748280/client.crt: no such file or directory
ingress_addon_legacy_test.go:39: (dbg) Done: out/minikube-linux-arm64 start -p ingress-addon-legacy-546551 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (1m57.749134281s)
--- PASS: TestIngressAddonLegacy/StartLegacyK8sCluster (117.75s)

TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (11.26s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddonActivation
ingress_addon_legacy_test.go:70: (dbg) Run:  out/minikube-linux-arm64 -p ingress-addon-legacy-546551 addons enable ingress --alsologtostderr -v=5
ingress_addon_legacy_test.go:70: (dbg) Done: out/minikube-linux-arm64 -p ingress-addon-legacy-546551 addons enable ingress --alsologtostderr -v=5: (11.257625005s)
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (11.26s)

TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.53s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation
ingress_addon_legacy_test.go:79: (dbg) Run:  out/minikube-linux-arm64 -p ingress-addon-legacy-546551 addons enable ingress-dns --alsologtostderr -v=5
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.53s)

TestJSONOutput/start/Command (78.67s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-536144 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=crio
E0531 19:01:40.442848    7804 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16569-2389/.minikube/profiles/functional-747104/client.crt: no such file or directory
E0531 19:02:05.253215    7804 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16569-2389/.minikube/profiles/addons-748280/client.crt: no such file or directory
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 start -p json-output-536144 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=crio: (1m18.668015275s)
--- PASS: TestJSONOutput/start/Command (78.67s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.8s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 pause -p json-output-536144 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.80s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.72s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 unpause -p json-output-536144 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.72s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (5.86s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 stop -p json-output-536144 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 stop -p json-output-536144 --output=json --user=testUser: (5.855158733s)
--- PASS: TestJSONOutput/stop/Command (5.86s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.23s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-error-227719 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p json-output-error-227719 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (76.117894ms)

-- stdout --
	{"specversion":"1.0","id":"d109510c-891e-4b2c-b0f2-9c907a57e938","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-227719] minikube v1.30.1 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"4e83109f-657c-4ed1-9522-7898432b5395","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=16569"}}
	{"specversion":"1.0","id":"b1dceb84-aedb-43a7-8f28-66696f7bca4d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"68375f3b-6004-4ed5-b2b4-fdeeb35fb287","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/16569-2389/kubeconfig"}}
	{"specversion":"1.0","id":"41c52a44-fef1-4d20-996b-6daf368ceb6d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/16569-2389/.minikube"}}
	{"specversion":"1.0","id":"3af0430b-d7cf-4fa0-8f24-749d92d3b2f4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"9bd20cda-fc3c-4ecb-8237-7a5d9811a955","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"f914628d-2238-4518-bbf3-18924a732457","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-227719" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p json-output-error-227719
--- PASS: TestErrorJSONOutput (0.23s)

TestKicCustomNetwork/create_custom_network (56.71s)

=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-840963 --network=
E0531 19:03:02.362987    7804 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16569-2389/.minikube/profiles/functional-747104/client.crt: no such file or directory
E0531 19:03:31.867749    7804 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16569-2389/.minikube/profiles/ingress-addon-legacy-546551/client.crt: no such file or directory
E0531 19:03:31.873023    7804 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16569-2389/.minikube/profiles/ingress-addon-legacy-546551/client.crt: no such file or directory
E0531 19:03:31.883298    7804 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16569-2389/.minikube/profiles/ingress-addon-legacy-546551/client.crt: no such file or directory
E0531 19:03:31.903557    7804 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16569-2389/.minikube/profiles/ingress-addon-legacy-546551/client.crt: no such file or directory
E0531 19:03:31.943814    7804 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16569-2389/.minikube/profiles/ingress-addon-legacy-546551/client.crt: no such file or directory
E0531 19:03:32.024088    7804 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16569-2389/.minikube/profiles/ingress-addon-legacy-546551/client.crt: no such file or directory
E0531 19:03:32.184455    7804 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16569-2389/.minikube/profiles/ingress-addon-legacy-546551/client.crt: no such file or directory
E0531 19:03:32.504997    7804 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16569-2389/.minikube/profiles/ingress-addon-legacy-546551/client.crt: no such file or directory
E0531 19:03:33.145883    7804 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16569-2389/.minikube/profiles/ingress-addon-legacy-546551/client.crt: no such file or directory
E0531 19:03:34.426430    7804 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16569-2389/.minikube/profiles/ingress-addon-legacy-546551/client.crt: no such file or directory
E0531 19:03:36.986874    7804 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16569-2389/.minikube/profiles/ingress-addon-legacy-546551/client.crt: no such file or directory
E0531 19:03:42.107982    7804 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16569-2389/.minikube/profiles/ingress-addon-legacy-546551/client.crt: no such file or directory
E0531 19:03:52.348132    7804 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16569-2389/.minikube/profiles/ingress-addon-legacy-546551/client.crt: no such file or directory
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-840963 --network=: (54.475695904s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-840963" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-840963
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-840963: (2.206875523s)
--- PASS: TestKicCustomNetwork/create_custom_network (56.71s)

TestKicCustomNetwork/use_default_bridge_network (36.85s)

=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-631368 --network=bridge
E0531 19:04:12.828357    7804 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16569-2389/.minikube/profiles/ingress-addon-legacy-546551/client.crt: no such file or directory
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-631368 --network=bridge: (34.795656151s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-631368" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-631368
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-631368: (2.033295702s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (36.85s)

TestKicExistingNetwork (33.18s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-arm64 start -p existing-network-530647 --network=existing-network
E0531 19:04:53.788585    7804 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16569-2389/.minikube/profiles/ingress-addon-legacy-546551/client.crt: no such file or directory
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-arm64 start -p existing-network-530647 --network=existing-network: (31.392995956s)
helpers_test.go:175: Cleaning up "existing-network-530647" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p existing-network-530647
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p existing-network-530647: (1.625464483s)
--- PASS: TestKicExistingNetwork (33.18s)

TestKicCustomSubnet (34.67s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-subnet-815765 --subnet=192.168.60.0/24
E0531 19:05:18.520986    7804 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16569-2389/.minikube/profiles/functional-747104/client.crt: no such file or directory
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-subnet-815765 --subnet=192.168.60.0/24: (32.896897545s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-815765 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-815765" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p custom-subnet-815765
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p custom-subnet-815765: (1.74527081s)
--- PASS: TestKicCustomSubnet (34.67s)

TestKicStaticIP (38.97s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-arm64 start -p static-ip-524741 --static-ip=192.168.200.200
E0531 19:05:46.204011    7804 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16569-2389/.minikube/profiles/functional-747104/client.crt: no such file or directory
E0531 19:06:15.709612    7804 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16569-2389/.minikube/profiles/ingress-addon-legacy-546551/client.crt: no such file or directory
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-arm64 start -p static-ip-524741 --static-ip=192.168.200.200: (36.724286722s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-arm64 -p static-ip-524741 ip
helpers_test.go:175: Cleaning up "static-ip-524741" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p static-ip-524741
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p static-ip-524741: (2.083245528s)
--- PASS: TestKicStaticIP (38.97s)

TestMainNoArgs (0.05s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-arm64
--- PASS: TestMainNoArgs (0.05s)

TestMinikubeProfile (68.26s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p first-986104 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p first-986104 --driver=docker  --container-runtime=crio: (31.015811103s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p second-988787 --driver=docker  --container-runtime=crio
E0531 19:07:05.254002    7804 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16569-2389/.minikube/profiles/addons-748280/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p second-988787 --driver=docker  --container-runtime=crio: (31.75593591s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile first-986104
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile second-988787
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
helpers_test.go:175: Cleaning up "second-988787" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p second-988787
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p second-988787: (2.007247363s)
helpers_test.go:175: Cleaning up "first-986104" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p first-986104
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p first-986104: (2.272510248s)
--- PASS: TestMinikubeProfile (68.26s)

TestMountStart/serial/StartWithMountFirst (7.84s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-1-055738 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-1-055738 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (6.844523522s)
--- PASS: TestMountStart/serial/StartWithMountFirst (7.84s)

TestMountStart/serial/VerifyMountFirst (0.28s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-1-055738 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.28s)

TestMountStart/serial/StartWithMountSecond (6.7s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-057691 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-057691 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (5.697497083s)
--- PASS: TestMountStart/serial/StartWithMountSecond (6.70s)

TestMountStart/serial/VerifyMountSecond (0.26s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-057691 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.26s)

TestMountStart/serial/DeleteFirst (1.74s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p mount-start-1-055738 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p mount-start-1-055738 --alsologtostderr -v=5: (1.741067314s)
--- PASS: TestMountStart/serial/DeleteFirst (1.74s)

TestMountStart/serial/VerifyMountPostDelete (0.27s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-057691 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.27s)

TestMountStart/serial/Stop (1.23s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-arm64 stop -p mount-start-2-057691
mount_start_test.go:155: (dbg) Done: out/minikube-linux-arm64 stop -p mount-start-2-057691: (1.231598017s)
--- PASS: TestMountStart/serial/Stop (1.23s)

TestMountStart/serial/RestartStopped (7.97s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-057691
mount_start_test.go:166: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-057691: (6.964987764s)
--- PASS: TestMountStart/serial/RestartStopped (7.97s)

TestMountStart/serial/VerifyMountPostStop (0.27s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-057691 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.27s)

TestMultiNode/serial/FreshStart2Nodes (69.59s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:85: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-025078 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=crio
E0531 19:08:28.303071    7804 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16569-2389/.minikube/profiles/addons-748280/client.crt: no such file or directory
E0531 19:08:31.867596    7804 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16569-2389/.minikube/profiles/ingress-addon-legacy-546551/client.crt: no such file or directory
E0531 19:08:59.550372    7804 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16569-2389/.minikube/profiles/ingress-addon-legacy-546551/client.crt: no such file or directory
multinode_test.go:85: (dbg) Done: out/minikube-linux-arm64 start -p multinode-025078 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=crio: (1m9.050215205s)
multinode_test.go:91: (dbg) Run:  out/minikube-linux-arm64 -p multinode-025078 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (69.59s)

TestMultiNode/serial/DeployApp2Nodes (6.94s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:481: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-025078 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:486: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-025078 -- rollout status deployment/busybox
multinode_test.go:486: (dbg) Done: out/minikube-linux-arm64 kubectl -p multinode-025078 -- rollout status deployment/busybox: (4.821763205s)
multinode_test.go:493: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-025078 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:516: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-025078 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:524: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-025078 -- exec busybox-67b7f59bb-9zwlk -- nslookup kubernetes.io
multinode_test.go:524: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-025078 -- exec busybox-67b7f59bb-fn4vn -- nslookup kubernetes.io
multinode_test.go:534: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-025078 -- exec busybox-67b7f59bb-9zwlk -- nslookup kubernetes.default
multinode_test.go:534: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-025078 -- exec busybox-67b7f59bb-fn4vn -- nslookup kubernetes.default
multinode_test.go:542: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-025078 -- exec busybox-67b7f59bb-9zwlk -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:542: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-025078 -- exec busybox-67b7f59bb-fn4vn -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (6.94s)

TestMultiNode/serial/AddNode (46.62s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:110: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-025078 -v 3 --alsologtostderr
multinode_test.go:110: (dbg) Done: out/minikube-linux-arm64 node add -p multinode-025078 -v 3 --alsologtostderr: (45.887594436s)
multinode_test.go:116: (dbg) Run:  out/minikube-linux-arm64 -p multinode-025078 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (46.62s)

TestMultiNode/serial/ProfileList (0.35s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:132: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.35s)

TestMultiNode/serial/CopyFile (10.46s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:173: (dbg) Run:  out/minikube-linux-arm64 -p multinode-025078 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-025078 cp testdata/cp-test.txt multinode-025078:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-025078 ssh -n multinode-025078 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-025078 cp multinode-025078:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2934900605/001/cp-test_multinode-025078.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-025078 ssh -n multinode-025078 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-025078 cp multinode-025078:/home/docker/cp-test.txt multinode-025078-m02:/home/docker/cp-test_multinode-025078_multinode-025078-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-025078 ssh -n multinode-025078 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-025078 ssh -n multinode-025078-m02 "sudo cat /home/docker/cp-test_multinode-025078_multinode-025078-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-025078 cp multinode-025078:/home/docker/cp-test.txt multinode-025078-m03:/home/docker/cp-test_multinode-025078_multinode-025078-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-025078 ssh -n multinode-025078 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-025078 ssh -n multinode-025078-m03 "sudo cat /home/docker/cp-test_multinode-025078_multinode-025078-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-025078 cp testdata/cp-test.txt multinode-025078-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-025078 ssh -n multinode-025078-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-025078 cp multinode-025078-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2934900605/001/cp-test_multinode-025078-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-025078 ssh -n multinode-025078-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-025078 cp multinode-025078-m02:/home/docker/cp-test.txt multinode-025078:/home/docker/cp-test_multinode-025078-m02_multinode-025078.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-025078 ssh -n multinode-025078-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-025078 ssh -n multinode-025078 "sudo cat /home/docker/cp-test_multinode-025078-m02_multinode-025078.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-025078 cp multinode-025078-m02:/home/docker/cp-test.txt multinode-025078-m03:/home/docker/cp-test_multinode-025078-m02_multinode-025078-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-025078 ssh -n multinode-025078-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-025078 ssh -n multinode-025078-m03 "sudo cat /home/docker/cp-test_multinode-025078-m02_multinode-025078-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-025078 cp testdata/cp-test.txt multinode-025078-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-025078 ssh -n multinode-025078-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-025078 cp multinode-025078-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2934900605/001/cp-test_multinode-025078-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-025078 ssh -n multinode-025078-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-025078 cp multinode-025078-m03:/home/docker/cp-test.txt multinode-025078:/home/docker/cp-test_multinode-025078-m03_multinode-025078.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-025078 ssh -n multinode-025078-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-025078 ssh -n multinode-025078 "sudo cat /home/docker/cp-test_multinode-025078-m03_multinode-025078.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-025078 cp multinode-025078-m03:/home/docker/cp-test.txt multinode-025078-m02:/home/docker/cp-test_multinode-025078-m03_multinode-025078-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-025078 ssh -n multinode-025078-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-025078 ssh -n multinode-025078-m02 "sudo cat /home/docker/cp-test_multinode-025078-m03_multinode-025078-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (10.46s)

TestMultiNode/serial/StopNode (2.35s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:210: (dbg) Run:  out/minikube-linux-arm64 -p multinode-025078 node stop m03
multinode_test.go:210: (dbg) Done: out/minikube-linux-arm64 -p multinode-025078 node stop m03: (1.240959315s)
multinode_test.go:216: (dbg) Run:  out/minikube-linux-arm64 -p multinode-025078 status
multinode_test.go:216: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-025078 status: exit status 7 (561.756337ms)

-- stdout --
	multinode-025078
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-025078-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-025078-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:223: (dbg) Run:  out/minikube-linux-arm64 -p multinode-025078 status --alsologtostderr
multinode_test.go:223: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-025078 status --alsologtostderr: exit status 7 (543.369927ms)

-- stdout --
	multinode-025078
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-025078-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-025078-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0531 19:10:17.893681   81577 out.go:296] Setting OutFile to fd 1 ...
	I0531 19:10:17.893871   81577 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0531 19:10:17.893928   81577 out.go:309] Setting ErrFile to fd 2...
	I0531 19:10:17.893950   81577 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0531 19:10:17.894138   81577 root.go:336] Updating PATH: /home/jenkins/minikube-integration/16569-2389/.minikube/bin
	I0531 19:10:17.894350   81577 out.go:303] Setting JSON to false
	I0531 19:10:17.894413   81577 mustload.go:65] Loading cluster: multinode-025078
	I0531 19:10:17.894539   81577 notify.go:220] Checking for updates...
	I0531 19:10:17.894879   81577 config.go:182] Loaded profile config "multinode-025078": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.27.2
	I0531 19:10:17.894926   81577 status.go:255] checking status of multinode-025078 ...
	I0531 19:10:17.895479   81577 cli_runner.go:164] Run: docker container inspect multinode-025078 --format={{.State.Status}}
	I0531 19:10:17.917319   81577 status.go:330] multinode-025078 host status = "Running" (err=<nil>)
	I0531 19:10:17.917366   81577 host.go:66] Checking if "multinode-025078" exists ...
	I0531 19:10:17.917655   81577 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-025078
	I0531 19:10:17.937182   81577 host.go:66] Checking if "multinode-025078" exists ...
	I0531 19:10:17.937499   81577 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0531 19:10:17.937542   81577 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-025078
	I0531 19:10:17.973980   81577 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32847 SSHKeyPath:/home/jenkins/minikube-integration/16569-2389/.minikube/machines/multinode-025078/id_rsa Username:docker}
	I0531 19:10:18.065643   81577 ssh_runner.go:195] Run: systemctl --version
	I0531 19:10:18.071692   81577 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0531 19:10:18.086236   81577 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0531 19:10:18.150763   81577 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:40 OomKillDisable:true NGoroutines:55 SystemTime:2023-05-31 19:10:18.140451554 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1037-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215166976 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:24.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dce8eb055cbb6872793272b4f20ed16117344f8 Expected:3dce8eb055cbb6872793272b4f20ed16117344f8} RuncCommit:{ID:v1.1.7-0-g860f061 Expected:v1.1.7-0-g860f061} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.5] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.18.1]] Warnings:<nil>}}
	I0531 19:10:18.151381   81577 kubeconfig.go:92] found "multinode-025078" server: "https://192.168.58.2:8443"
	I0531 19:10:18.151405   81577 api_server.go:166] Checking apiserver status ...
	I0531 19:10:18.151451   81577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 19:10:18.164823   81577 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1239/cgroup
	I0531 19:10:18.178472   81577 api_server.go:182] apiserver freezer: "3:freezer:/docker/9d3cbea3acdb0b8d1b32dece59d31af9111f937e060f05b7083a59033f8e8705/crio/crio-57c382e3990cce47ca0b19a8b8fbc37e4b7396fa1cec72790fd3508bc03f1936"
	I0531 19:10:18.178540   81577 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/9d3cbea3acdb0b8d1b32dece59d31af9111f937e060f05b7083a59033f8e8705/crio/crio-57c382e3990cce47ca0b19a8b8fbc37e4b7396fa1cec72790fd3508bc03f1936/freezer.state
	I0531 19:10:18.190564   81577 api_server.go:204] freezer state: "THAWED"
	I0531 19:10:18.190592   81577 api_server.go:253] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I0531 19:10:18.199777   81577 api_server.go:279] https://192.168.58.2:8443/healthz returned 200:
	ok
	I0531 19:10:18.199807   81577 status.go:421] multinode-025078 apiserver status = Running (err=<nil>)
	I0531 19:10:18.199818   81577 status.go:257] multinode-025078 status: &{Name:multinode-025078 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0531 19:10:18.199835   81577 status.go:255] checking status of multinode-025078-m02 ...
	I0531 19:10:18.200188   81577 cli_runner.go:164] Run: docker container inspect multinode-025078-m02 --format={{.State.Status}}
	I0531 19:10:18.218682   81577 status.go:330] multinode-025078-m02 host status = "Running" (err=<nil>)
	I0531 19:10:18.218725   81577 host.go:66] Checking if "multinode-025078-m02" exists ...
	I0531 19:10:18.219058   81577 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-025078-m02
	I0531 19:10:18.239293   81577 host.go:66] Checking if "multinode-025078-m02" exists ...
	I0531 19:10:18.239605   81577 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0531 19:10:18.239647   81577 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-025078-m02
	I0531 19:10:18.260080   81577 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32852 SSHKeyPath:/home/jenkins/minikube-integration/16569-2389/.minikube/machines/multinode-025078-m02/id_rsa Username:docker}
	I0531 19:10:18.353817   81577 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0531 19:10:18.367896   81577 status.go:257] multinode-025078-m02 status: &{Name:multinode-025078-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0531 19:10:18.367925   81577 status.go:255] checking status of multinode-025078-m03 ...
	I0531 19:10:18.368261   81577 cli_runner.go:164] Run: docker container inspect multinode-025078-m03 --format={{.State.Status}}
	I0531 19:10:18.386546   81577 status.go:330] multinode-025078-m03 host status = "Stopped" (err=<nil>)
	I0531 19:10:18.386573   81577 status.go:343] host is not running, skipping remaining checks
	I0531 19:10:18.386580   81577 status.go:257] multinode-025078-m03 status: &{Name:multinode-025078-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.35s)

TestMultiNode/serial/StartAfterStop (12.41s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:244: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:254: (dbg) Run:  out/minikube-linux-arm64 -p multinode-025078 node start m03 --alsologtostderr
E0531 19:10:18.521005    7804 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16569-2389/.minikube/profiles/functional-747104/client.crt: no such file or directory
multinode_test.go:254: (dbg) Done: out/minikube-linux-arm64 -p multinode-025078 node start m03 --alsologtostderr: (11.604472235s)
multinode_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p multinode-025078 status
multinode_test.go:275: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (12.41s)

TestMultiNode/serial/RestartKeepsNodes (117.54s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:283: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-025078
multinode_test.go:290: (dbg) Run:  out/minikube-linux-arm64 stop -p multinode-025078
multinode_test.go:290: (dbg) Done: out/minikube-linux-arm64 stop -p multinode-025078: (24.990255856s)
multinode_test.go:295: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-025078 --wait=true -v=8 --alsologtostderr
E0531 19:12:05.253565    7804 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16569-2389/.minikube/profiles/addons-748280/client.crt: no such file or directory
multinode_test.go:295: (dbg) Done: out/minikube-linux-arm64 start -p multinode-025078 --wait=true -v=8 --alsologtostderr: (1m32.412255518s)
multinode_test.go:300: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-025078
--- PASS: TestMultiNode/serial/RestartKeepsNodes (117.54s)

TestMultiNode/serial/DeleteNode (5.13s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:394: (dbg) Run:  out/minikube-linux-arm64 -p multinode-025078 node delete m03
multinode_test.go:394: (dbg) Done: out/minikube-linux-arm64 -p multinode-025078 node delete m03: (4.355029424s)
multinode_test.go:400: (dbg) Run:  out/minikube-linux-arm64 -p multinode-025078 status --alsologtostderr
multinode_test.go:414: (dbg) Run:  docker volume ls
multinode_test.go:424: (dbg) Run:  kubectl get nodes
multinode_test.go:432: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.13s)

TestMultiNode/serial/StopMultiNode (24s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:314: (dbg) Run:  out/minikube-linux-arm64 -p multinode-025078 stop
multinode_test.go:314: (dbg) Done: out/minikube-linux-arm64 -p multinode-025078 stop: (23.824445915s)
multinode_test.go:320: (dbg) Run:  out/minikube-linux-arm64 -p multinode-025078 status
multinode_test.go:320: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-025078 status: exit status 7 (90.202991ms)

-- stdout --
	multinode-025078
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-025078-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:327: (dbg) Run:  out/minikube-linux-arm64 -p multinode-025078 status --alsologtostderr
multinode_test.go:327: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-025078 status --alsologtostderr: exit status 7 (88.073278ms)

-- stdout --
	multinode-025078
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-025078-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0531 19:12:57.424326   89595 out.go:296] Setting OutFile to fd 1 ...
	I0531 19:12:57.424495   89595 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0531 19:12:57.424506   89595 out.go:309] Setting ErrFile to fd 2...
	I0531 19:12:57.424513   89595 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0531 19:12:57.424677   89595 root.go:336] Updating PATH: /home/jenkins/minikube-integration/16569-2389/.minikube/bin
	I0531 19:12:57.424855   89595 out.go:303] Setting JSON to false
	I0531 19:12:57.424894   89595 mustload.go:65] Loading cluster: multinode-025078
	I0531 19:12:57.424982   89595 notify.go:220] Checking for updates...
	I0531 19:12:57.425281   89595 config.go:182] Loaded profile config "multinode-025078": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.27.2
	I0531 19:12:57.425290   89595 status.go:255] checking status of multinode-025078 ...
	I0531 19:12:57.426121   89595 cli_runner.go:164] Run: docker container inspect multinode-025078 --format={{.State.Status}}
	I0531 19:12:57.446296   89595 status.go:330] multinode-025078 host status = "Stopped" (err=<nil>)
	I0531 19:12:57.446320   89595 status.go:343] host is not running, skipping remaining checks
	I0531 19:12:57.446328   89595 status.go:257] multinode-025078 status: &{Name:multinode-025078 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0531 19:12:57.446350   89595 status.go:255] checking status of multinode-025078-m02 ...
	I0531 19:12:57.446680   89595 cli_runner.go:164] Run: docker container inspect multinode-025078-m02 --format={{.State.Status}}
	I0531 19:12:57.464479   89595 status.go:330] multinode-025078-m02 host status = "Stopped" (err=<nil>)
	I0531 19:12:57.464501   89595 status.go:343] host is not running, skipping remaining checks
	I0531 19:12:57.464509   89595 status.go:257] multinode-025078-m02 status: &{Name:multinode-025078-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (24.00s)
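
Note: the two non-zero exits above are the expected outcome, not failures: exit status 7 is minikube's status code for a stopped host (elsewhere in this report the helpers annotate it "may be ok"). A sketch of driving that check from Go (illustrative; the real assertions live in multinode_test.go):

	package main

	import (
		"errors"
		"fmt"
		"os/exec"
	)

	func main() {
		cmd := exec.Command("out/minikube-linux-arm64", "-p", "multinode-025078", "status")
		out, err := cmd.CombinedOutput()
		fmt.Print(string(out))
		var exitErr *exec.ExitError
		if errors.As(err, &exitErr) {
			// 7 = host stopped; any other non-zero code would be a real failure.
			fmt.Println("exit code:", exitErr.ExitCode())
		}
	}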

TestMultiNode/serial/RestartMultiNode (80.08s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:344: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:354: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-025078 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=crio
E0531 19:13:31.867708    7804 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16569-2389/.minikube/profiles/ingress-addon-legacy-546551/client.crt: no such file or directory
multinode_test.go:354: (dbg) Done: out/minikube-linux-arm64 start -p multinode-025078 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=crio: (1m19.352772884s)
multinode_test.go:360: (dbg) Run:  out/minikube-linux-arm64 -p multinode-025078 status --alsologtostderr
multinode_test.go:374: (dbg) Run:  kubectl get nodes
multinode_test.go:382: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (80.08s)

TestMultiNode/serial/ValidateNameConflict (39.5s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:443: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-025078
multinode_test.go:452: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-025078-m02 --driver=docker  --container-runtime=crio
multinode_test.go:452: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p multinode-025078-m02 --driver=docker  --container-runtime=crio: exit status 14 (75.237657ms)

-- stdout --
	* [multinode-025078-m02] minikube v1.30.1 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=16569
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/16569-2389/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/16569-2389/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	! Profile name 'multinode-025078-m02' is duplicated with machine name 'multinode-025078-m02' in profile 'multinode-025078'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:460: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-025078-m03 --driver=docker  --container-runtime=crio
multinode_test.go:460: (dbg) Done: out/minikube-linux-arm64 start -p multinode-025078-m03 --driver=docker  --container-runtime=crio: (36.877113024s)
multinode_test.go:467: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-025078
multinode_test.go:467: (dbg) Non-zero exit: out/minikube-linux-arm64 node add -p multinode-025078: exit status 80 (519.468816ms)

-- stdout --
	* Adding node m03 to cluster multinode-025078
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-025078-m03 already exists in multinode-025078-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-arm64 delete -p multinode-025078-m03
multinode_test.go:472: (dbg) Done: out/minikube-linux-arm64 delete -p multinode-025078-m03: (1.973711088s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (39.50s)
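
Note: the two failures above exercise the same rule from opposite directions: a new profile name may not collide with a machine name already owned by another profile (exit status 14), and node add refuses a node name already taken by a standalone profile (exit status 80). A hypothetical helper capturing the first rule (names and types are illustrative, not minikube's code):

	package main

	import "fmt"

	// profileNameFree reports whether a proposed profile name collides with a
	// machine name already owned by an existing profile.
	func profileNameFree(name string, machinesByProfile map[string][]string) bool {
		for _, machines := range machinesByProfile {
			for _, m := range machines {
				if m == name {
					return false
				}
			}
		}
		return true
	}

	func main() {
		existing := map[string][]string{
			"multinode-025078": {"multinode-025078", "multinode-025078-m02"},
		}
		fmt.Println(profileNameFree("multinode-025078-m02", existing)) // false -> MK_USAGE
		fmt.Println(profileNameFree("multinode-025078-m03", existing)) // true  -> start proceeds
	}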

TestScheduledStopUnix (112.14s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-arm64 start -p scheduled-stop-817422 --memory=2048 --driver=docker  --container-runtime=crio
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-arm64 start -p scheduled-stop-817422 --memory=2048 --driver=docker  --container-runtime=crio: (36.046115501s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-817422 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-arm64 status --format={{.TimeToStop}} -p scheduled-stop-817422 -n scheduled-stop-817422
E0531 19:18:31.867529    7804 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16569-2389/.minikube/profiles/ingress-addon-legacy-546551/client.crt: no such file or directory
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-817422 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-817422 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-817422 -n scheduled-stop-817422
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-817422
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-817422 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-817422
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p scheduled-stop-817422: exit status 7 (69.185467ms)

-- stdout --
	scheduled-stop-817422
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-817422 -n scheduled-stop-817422
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-817422 -n scheduled-stop-817422: exit status 7 (67.507336ms)

-- stdout --
	Stopped

-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-817422" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p scheduled-stop-817422
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p scheduled-stop-817422: (4.548276952s)
--- PASS: TestScheduledStopUnix (112.14s)
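
Note: the test repeatedly re-schedules and cancels a stop; "os: process already finished" only means the previously scheduled stop process was already gone when the test signalled it. A minimal sketch of the scheduling idea, shelling out to minikube (illustrative; minikube's real implementation persists the schedule in a background process rather than an in-process timer):

	package main

	import (
		"os/exec"
		"time"
	)

	// scheduleStop stops the profile after delay.
	func scheduleStop(profile string, delay time.Duration) *time.Timer {
		return time.AfterFunc(delay, func() {
			exec.Command("out/minikube-linux-arm64", "stop", "-p", profile).Run()
		})
	}

	func main() {
		t := scheduleStop("scheduled-stop-817422", 15*time.Second)
		// Cancelling, as "--cancel-scheduled" does, is just stopping the timer here.
		t.Stop()
	}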

TestInsufficientStorage (10.63s)

=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-arm64 start -p insufficient-storage-158364 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=crio
E0531 19:19:54.910577    7804 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16569-2389/.minikube/profiles/ingress-addon-legacy-546551/client.crt: no such file or directory
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p insufficient-storage-158364 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=crio: exit status 26 (8.089280942s)

-- stdout --
	{"specversion":"1.0","id":"45a017b9-9abe-48df-b59f-669dc345ac7b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-158364] minikube v1.30.1 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"d6efc2bc-c280-4d7a-aa1c-6d04c2ba8856","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=16569"}}
	{"specversion":"1.0","id":"28b8f2c1-4e31-4d97-8c8f-edc5d47345b8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"5aa6d1b4-6f74-4308-b187-1b443ba1954b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/16569-2389/kubeconfig"}}
	{"specversion":"1.0","id":"be3673ed-0f11-4252-bd96-8450c872da0a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/16569-2389/.minikube"}}
	{"specversion":"1.0","id":"3dbcd939-9025-46bd-b90b-679339573ad4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"b3d1307d-2e1b-4d09-a05f-1ff2bacbfea4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"675a19a4-5983-4e22-a2b6-7a5b78d7263f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"ba6c8c7d-33da-4b8a-8e12-0c04b068297d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"f04ec6c0-3113-4e33-b820-ef19e06024c7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"6a5ecea1-19fc-4061-a44e-c120bc88ad8c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"e033339b-fa5a-43c6-8a0d-f647ba80665c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting control plane node insufficient-storage-158364 in cluster insufficient-storage-158364","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"d0a24d15-98ec-44e2-a508-0d6696da302d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"df10f432-bafb-440a-9dde-9bea2ebfc97d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"9e03ec62-1915-4921-babd-c369bc5ed9a2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100%% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-158364 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-158364 --output=json --layout=cluster: exit status 7 (312.831022ms)

-- stdout --
	{"Name":"insufficient-storage-158364","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.30.1","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-158364","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E0531 19:19:56.095065  106939 status.go:415] kubeconfig endpoint: extract IP: "insufficient-storage-158364" does not appear in /home/jenkins/minikube-integration/16569-2389/kubeconfig

** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-158364 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-158364 --output=json --layout=cluster: exit status 7 (299.255365ms)

-- stdout --
	{"Name":"insufficient-storage-158364","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.30.1","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-158364","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E0531 19:19:56.395425  106993 status.go:415] kubeconfig endpoint: extract IP: "insufficient-storage-158364" does not appear in /home/jenkins/minikube-integration/16569-2389/kubeconfig
	E0531 19:19:56.407775  106993 status.go:559] unable to read event log: stat: stat /home/jenkins/minikube-integration/16569-2389/.minikube/profiles/insufficient-storage-158364/events.json: no such file or directory

** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-158364" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p insufficient-storage-158364
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p insufficient-storage-158364: (1.931060112s)
--- PASS: TestInsufficientStorage (10.63s)
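
Note: with --output=json minikube emits one CloudEvents-style JSON object per line, and the RSRC_DOCKER_STORAGE error event above carries the exit code (26) in its data payload. A sketch that scans such output for error events (field names taken from the log above; the struct is illustrative):

	package main

	import (
		"bufio"
		"encoding/json"
		"fmt"
		"os"
	)

	type event struct {
		Type string            `json:"type"`
		Data map[string]string `json:"data"`
	}

	func main() {
		sc := bufio.NewScanner(os.Stdin)
		sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // event lines can be long
		for sc.Scan() {
			var ev event
			if json.Unmarshal(sc.Bytes(), &ev) != nil {
				continue // ignore non-JSON lines
			}
			if ev.Type == "io.k8s.sigs.minikube.error" {
				fmt.Printf("%s (exitcode %s)\n", ev.Data["name"], ev.Data["exitcode"])
			}
		}
	}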

TestKubernetesUpgrade (386.78s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:234: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-843072 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:234: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-843072 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (1m1.690151959s)
version_upgrade_test.go:239: (dbg) Run:  out/minikube-linux-arm64 stop -p kubernetes-upgrade-843072
E0531 19:22:05.253211    7804 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16569-2389/.minikube/profiles/addons-748280/client.crt: no such file or directory
version_upgrade_test.go:239: (dbg) Done: out/minikube-linux-arm64 stop -p kubernetes-upgrade-843072: (14.399489268s)
version_upgrade_test.go:244: (dbg) Run:  out/minikube-linux-arm64 -p kubernetes-upgrade-843072 status --format={{.Host}}
version_upgrade_test.go:244: (dbg) Non-zero exit: out/minikube-linux-arm64 -p kubernetes-upgrade-843072 status --format={{.Host}}: exit status 7 (93.517912ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:246: status error: exit status 7 (may be ok)
version_upgrade_test.go:255: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-843072 --memory=2200 --kubernetes-version=v1.27.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
E0531 19:23:31.868452    7804 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16569-2389/.minikube/profiles/ingress-addon-legacy-546551/client.crt: no such file or directory
version_upgrade_test.go:255: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-843072 --memory=2200 --kubernetes-version=v1.27.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (4m40.228046635s)
version_upgrade_test.go:260: (dbg) Run:  kubectl --context kubernetes-upgrade-843072 version --output=json
version_upgrade_test.go:279: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:281: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-843072 --memory=2200 --kubernetes-version=v1.16.0 --driver=docker  --container-runtime=crio
version_upgrade_test.go:281: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p kubernetes-upgrade-843072 --memory=2200 --kubernetes-version=v1.16.0 --driver=docker  --container-runtime=crio: exit status 106 (79.871639ms)

-- stdout --
	* [kubernetes-upgrade-843072] minikube v1.30.1 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=16569
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/16569-2389/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/16569-2389/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.27.2 cluster to v1.16.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.16.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-843072
	    minikube start -p kubernetes-upgrade-843072 --kubernetes-version=v1.16.0
	    
	    2) Create a second cluster with Kubernetes 1.16.0, by running:
	    
	    minikube start -p kubernetes-upgrade-8430722 --kubernetes-version=v1.16.0
	    
	    3) Use the existing cluster at version Kubernetes 1.27.2, by running:
	    
	    minikube start -p kubernetes-upgrade-843072 --kubernetes-version=v1.27.2
	    

** /stderr **
version_upgrade_test.go:285: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:287: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-843072 --memory=2200 --kubernetes-version=v1.27.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
E0531 19:27:05.253858    7804 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16569-2389/.minikube/profiles/addons-748280/client.crt: no such file or directory
version_upgrade_test.go:287: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-843072 --memory=2200 --kubernetes-version=v1.27.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (27.945171671s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-843072" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubernetes-upgrade-843072
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p kubernetes-upgrade-843072: (2.206754596s)
--- PASS: TestKubernetesUpgrade (386.78s)
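
Note: the downgrade attempt fails by design: minikube refuses to move an existing cluster from v1.27.2 back to v1.16.0 (exit status 106, K8S_DOWNGRADE_UNSUPPORTED) and suggests delete/recreate instead. The gate is essentially a semantic-version comparison; a sketch using golang.org/x/mod/semver (illustrative, not minikube's code):

	package main

	import (
		"fmt"

		"golang.org/x/mod/semver"
	)

	func main() {
		current, requested := "v1.27.2", "v1.16.0"
		if semver.Compare(requested, current) < 0 {
			// minikube exits 106 here rather than risk an unsafe downgrade.
			fmt.Printf("unable to safely downgrade %s cluster to %s\n", current, requested)
		}
	}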

TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-969645 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p NoKubernetes-969645 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=crio: exit status 14 (88.143685ms)

-- stdout --
	* [NoKubernetes-969645] minikube v1.30.1 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=16569
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/16569-2389/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/16569-2389/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)
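
Note: this is a pure flag-validation test: --no-kubernetes and --kubernetes-version are mutually exclusive, so minikube exits 14 (MK_USAGE) before touching the driver. The check amounts to something like (sketch; identifiers are illustrative):

	package main

	import (
		"errors"
		"fmt"
	)

	func validateStartFlags(noKubernetes bool, kubernetesVersion string) error {
		if noKubernetes && kubernetesVersion != "" {
			return errors.New("cannot specify --kubernetes-version with --no-kubernetes")
		}
		return nil
	}

	func main() {
		fmt.Println(validateStartFlags(true, "1.20")) // the case above: usage error
	}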

TestNoKubernetes/serial/StartWithK8s (36.48s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-969645 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-969645 --driver=docker  --container-runtime=crio: (36.12135872s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-969645 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (36.48s)

TestNoKubernetes/serial/StartWithStopK8s (6.64s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-969645 --no-kubernetes --driver=docker  --container-runtime=crio
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-969645 --no-kubernetes --driver=docker  --container-runtime=crio: (4.372364132s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-969645 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-arm64 -p NoKubernetes-969645 status -o json: exit status 2 (334.003986ms)

-- stdout --
	{"Name":"NoKubernetes-969645","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-arm64 delete -p NoKubernetes-969645
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-arm64 delete -p NoKubernetes-969645: (1.928648396s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (6.64s)
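
Note: status -o json returning exit status 2 with Host "Running" but Kubelet/APIServer "Stopped" is exactly what --no-kubernetes should produce. The payload decodes into a small struct (field names copied from the JSON above; the type itself is illustrative):

	package main

	import (
		"encoding/json"
		"fmt"
	)

	type profileStatus struct {
		Name       string
		Host       string
		Kubelet    string
		APIServer  string
		Kubeconfig string
		Worker     bool
	}

	func main() {
		raw := `{"Name":"NoKubernetes-969645","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}`
		var st profileStatus
		if err := json.Unmarshal([]byte(raw), &st); err != nil {
			panic(err)
		}
		fmt.Println(st.Host, st.Kubelet) // Running Stopped
	}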

TestNoKubernetes/serial/Start (8.41s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-969645 --no-kubernetes --driver=docker  --container-runtime=crio
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-969645 --no-kubernetes --driver=docker  --container-runtime=crio: (8.412687342s)
--- PASS: TestNoKubernetes/serial/Start (8.41s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.29s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-969645 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-969645 "sudo systemctl is-active --quiet service kubelet": exit status 1 (289.420423ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.29s)
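
Note: here the non-zero exit is the assertion itself: systemctl is-active exits non-zero (3 here) for a unit that is not active, so "Process exited with status 3" confirms the kubelet is not running. Driving the same probe from Go (sketch):

	package main

	import (
		"errors"
		"fmt"
		"os/exec"
	)

	func main() {
		cmd := exec.Command("out/minikube-linux-arm64", "ssh", "-p", "NoKubernetes-969645",
			"sudo systemctl is-active --quiet service kubelet")
		err := cmd.Run()
		var exitErr *exec.ExitError
		if errors.As(err, &exitErr) && exitErr.ExitCode() != 0 {
			fmt.Println("kubelet inactive, as expected") // exit 3 = unit not active
		}
	}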

TestNoKubernetes/serial/ProfileList (0.6s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-arm64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-arm64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (0.60s)

TestNoKubernetes/serial/Stop (1.24s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-arm64 stop -p NoKubernetes-969645
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-arm64 stop -p NoKubernetes-969645: (1.240063064s)
--- PASS: TestNoKubernetes/serial/Stop (1.24s)

TestNoKubernetes/serial/StartNoArgs (6.96s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-969645 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-969645 --driver=docker  --container-runtime=crio: (6.964590342s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (6.96s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.27s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-969645 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-969645 "sudo systemctl is-active --quiet service kubelet": exit status 1 (270.662114ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.27s)

TestStoppedBinaryUpgrade/Setup (1.15s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (1.15s)

TestStoppedBinaryUpgrade/MinikubeLogs (0.72s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:218: (dbg) Run:  out/minikube-linux-arm64 logs -p stopped-upgrade-577066
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.72s)

TestPause/serial/Start (76.63s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -p pause-142925 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -p pause-142925 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=crio: (1m16.628144489s)
--- PASS: TestPause/serial/Start (76.63s)

TestNetworkPlugins/group/false (4.56s)

=== RUN   TestNetworkPlugins/group/false
net_test.go:230: (dbg) Run:  out/minikube-linux-arm64 start -p false-452504 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio
net_test.go:230: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p false-452504 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio: exit status 14 (282.863493ms)

-- stdout --
	* [false-452504] minikube v1.30.1 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=16569
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/16569-2389/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/16569-2389/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	

                                                
** stderr ** 
	I0531 19:27:43.242606  142474 out.go:296] Setting OutFile to fd 1 ...
	I0531 19:27:43.242794  142474 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0531 19:27:43.242805  142474 out.go:309] Setting ErrFile to fd 2...
	I0531 19:27:43.242810  142474 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0531 19:27:43.242987  142474 root.go:336] Updating PATH: /home/jenkins/minikube-integration/16569-2389/.minikube/bin
	I0531 19:27:43.243469  142474 out.go:303] Setting JSON to false
	I0531 19:27:43.252612  142474 start.go:127] hostinfo: {"hostname":"ip-172-31-21-244","uptime":4209,"bootTime":1685557055,"procs":358,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1037-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I0531 19:27:43.252710  142474 start.go:137] virtualization:  
	I0531 19:27:43.255378  142474 out.go:177] * [false-452504] minikube v1.30.1 on Ubuntu 20.04 (arm64)
	I0531 19:27:43.264859  142474 out.go:177]   - MINIKUBE_LOCATION=16569
	I0531 19:27:43.266778  142474 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0531 19:27:43.265786  142474 notify.go:220] Checking for updates...
	I0531 19:27:43.270617  142474 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/16569-2389/kubeconfig
	I0531 19:27:43.272990  142474 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/16569-2389/.minikube
	I0531 19:27:43.274938  142474 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0531 19:27:43.276958  142474 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0531 19:27:43.279316  142474 config.go:182] Loaded profile config "force-systemd-flag-124615": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.27.2
	I0531 19:27:43.279473  142474 driver.go:375] Setting default libvirt URI to qemu:///system
	I0531 19:27:43.315090  142474 docker.go:121] docker version: linux-24.0.2:Docker Engine - Community
	I0531 19:27:43.315188  142474 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0531 19:27:43.459588  142474 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:6 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:31 OomKillDisable:true NGoroutines:45 SystemTime:2023-05-31 19:27:43.448932101 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1037-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215166976 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:24.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dce8eb055cbb6872793272b4f20ed16117344f8 Expected:3dce8eb055cbb6872793272b4f20ed16117344f8} RuncCommit:{ID:v1.1.7-0-g860f061 Expected:v1.1.7-0-g860f061} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.5] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.18.1]] Warnings:<nil>}}
	I0531 19:27:43.459693  142474 docker.go:294] overlay module found
	I0531 19:27:43.461985  142474 out.go:177] * Using the docker driver based on user configuration
	I0531 19:27:43.463820  142474 start.go:297] selected driver: docker
	I0531 19:27:43.463838  142474 start.go:875] validating driver "docker" against <nil>
	I0531 19:27:43.463851  142474 start.go:886] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0531 19:27:43.467344  142474 out.go:177] 
	W0531 19:27:43.469090  142474 out.go:239] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I0531 19:27:43.470952  142474 out.go:177] 

** /stderr **
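
Note: the exit status 14 above is again MK_USAGE: CRI-O has no built-in networking, so minikube rejects --cni=false with that runtime before creating anything ("The \"crio\" container runtime requires CNI"). The guard reduces to something like (sketch; identifiers are illustrative):

	package main

	import "fmt"

	func validateCNI(containerRuntime, cniFlag string) error {
		if cniFlag == "false" && containerRuntime == "crio" {
			return fmt.Errorf("the %q container runtime requires CNI", containerRuntime)
		}
		return nil
	}

	func main() {
		fmt.Println(validateCNI("crio", "false")) // the failure exercised by this test
	}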
net_test.go:86: 
----------------------- debugLogs start: false-452504 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-452504

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-452504

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-452504

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-452504

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-452504

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-452504

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-452504

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-452504

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-452504

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-452504

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-452504" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-452504"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-452504" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-452504"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-452504" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-452504"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-452504

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-452504" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-452504"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-452504" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-452504"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-452504" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-452504" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-452504" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-452504" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-452504" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-452504" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-452504" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-452504" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-452504" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-452504"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-452504" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-452504"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-452504" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-452504"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-452504" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-452504"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-452504" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-452504"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-452504" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-452504" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-452504" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-452504" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-452504"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-452504" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-452504"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-452504" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-452504"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-452504" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-452504"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-452504" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-452504"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-452504

>>> host: docker daemon status:
* Profile "false-452504" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-452504"

>>> host: docker daemon config:
* Profile "false-452504" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-452504"

>>> host: /etc/docker/daemon.json:
* Profile "false-452504" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-452504"

>>> host: docker system info:
* Profile "false-452504" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-452504"

>>> host: cri-docker daemon status:
* Profile "false-452504" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-452504"

>>> host: cri-docker daemon config:
* Profile "false-452504" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-452504"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-452504" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-452504"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-452504" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-452504"

>>> host: cri-dockerd version:
* Profile "false-452504" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-452504"

>>> host: containerd daemon status:
* Profile "false-452504" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-452504"

>>> host: containerd daemon config:
* Profile "false-452504" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-452504"

>>> host: /lib/systemd/system/containerd.service:
* Profile "false-452504" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-452504"

>>> host: /etc/containerd/config.toml:
* Profile "false-452504" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-452504"

>>> host: containerd config dump:
* Profile "false-452504" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-452504"

>>> host: crio daemon status:
* Profile "false-452504" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-452504"

>>> host: crio daemon config:
* Profile "false-452504" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-452504"

>>> host: /etc/crio:
* Profile "false-452504" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-452504"

>>> host: crio config:
* Profile "false-452504" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-452504"

----------------------- debugLogs end: false-452504 [took: 4.099369558s] --------------------------------
helpers_test.go:175: Cleaning up "false-452504" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p false-452504
--- PASS: TestNetworkPlugins/group/false (4.56s)
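
Note: the repeated "Profile \"false-452504\" not found" lines above are expected once the profile has been deleted. A minimal sketch for inspecting and recreating such a profile by hand (flags assumed from the start commands used elsewhere in this run):
# List known profiles; a deleted profile no longer appears here.
minikube profile list
# Recreate the profile with the driver/runtime this job uses (assumption).
minikube start -p false-452504 --driver=docker --container-runtime=crio
# Remove it again when finished.
minikube delete -p false-452504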

                                                
                                    
TestStartStop/group/old-k8s-version/serial/FirstStart (125.55s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-085809 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.16.0
E0531 19:30:18.521273    7804 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16569-2389/.minikube/profiles/functional-747104/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-085809 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.16.0: (2m5.545304263s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (125.55s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/DeployApp (9.56s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-085809 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [41452628-5582-4b11-ab28-4dbfa9804633] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [41452628-5582-4b11-ab28-4dbfa9804633] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 9.029557877s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-085809 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (9.56s)
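
Note: the DeployApp sequence above can be reproduced by hand against the same kubeconfig context. A minimal sketch, assuming the repository's testdata/busybox.yaml manifest (which, per the waits above, labels the pod integration-test=busybox):
# Create the test pod from the repo's manifest.
kubectl --context old-k8s-version-085809 create -f testdata/busybox.yaml
# Wait for readiness; the test allows up to 8m0s.
kubectl --context old-k8s-version-085809 wait --for=condition=Ready pod -l integration-test=busybox --timeout=8m
# Exercise the pod the same way the test does.
kubectl --context old-k8s-version-085809 exec busybox -- /bin/sh -c "ulimit -n"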

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.85s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-085809 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-085809 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.85s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Stop (12.06s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p old-k8s-version-085809 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p old-k8s-version-085809 --alsologtostderr -v=3: (12.062941451s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (12.06s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.17s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-085809 -n old-k8s-version-085809
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-085809 -n old-k8s-version-085809: exit status 7 (76.155441ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p old-k8s-version-085809 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.17s)
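
Note: exit status 7 from "minikube status" is not a failure here. minikube encodes component health bit-wise in the exit code (host, cluster and Kubernetes from least- to most-significant bit), so a fully stopped profile commonly returns 7. A quick hand check (sketch):
# "Stopped" plus a non-zero exit status is the expected result after "minikube stop".
out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-085809 -n old-k8s-version-085809
echo "exit status: $?"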

                                                
                                    
TestStartStop/group/old-k8s-version/serial/SecondStart (459.19s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-085809 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.16.0
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-085809 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.16.0: (7m38.599008943s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-085809 -n old-k8s-version-085809
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (459.19s)

                                                
                                    
TestStartStop/group/no-preload/serial/FirstStart (63.4s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-536753 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.27.2
E0531 19:33:21.564928    7804 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16569-2389/.minikube/profiles/functional-747104/client.crt: no such file or directory
E0531 19:33:31.867995    7804 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16569-2389/.minikube/profiles/ingress-addon-legacy-546551/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-536753 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.27.2: (1m3.400940631s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (63.40s)

                                                
                                    
TestStartStop/group/no-preload/serial/DeployApp (8.48s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-536753 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [d9bdfb76-9b12-4123-9acf-c6c3c3f6b30e] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [d9bdfb76-9b12-4123-9acf-c6c3c3f6b30e] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 8.026737337s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-536753 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (8.48s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.08s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p no-preload-536753 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-536753 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.08s)

                                                
                                    
TestStartStop/group/no-preload/serial/Stop (12.13s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p no-preload-536753 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p no-preload-536753 --alsologtostderr -v=3: (12.128236854s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (12.13s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.16s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-536753 -n no-preload-536753
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-536753 -n no-preload-536753: exit status 7 (74.007451ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p no-preload-536753 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.16s)

                                                
                                    
TestStartStop/group/no-preload/serial/SecondStart (599.97s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-536753 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.27.2
E0531 19:35:18.520767    7804 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16569-2389/.minikube/profiles/functional-747104/client.crt: no such file or directory
E0531 19:36:34.910932    7804 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16569-2389/.minikube/profiles/ingress-addon-legacy-546551/client.crt: no such file or directory
E0531 19:37:05.254022    7804 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16569-2389/.minikube/profiles/addons-748280/client.crt: no such file or directory
E0531 19:38:31.867955    7804 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16569-2389/.minikube/profiles/ingress-addon-legacy-546551/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-536753 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.27.2: (9m59.577754593s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-536753 -n no-preload-536753
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (599.97s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (5.03s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-84b68f675b-c6bcm" [65886bfc-682d-4908-a66e-b3e73aa4ac86] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.031041107s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (5.03s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.1s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-84b68f675b-c6bcm" [65886bfc-682d-4908-a66e-b3e73aa4ac86] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.008420784s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-085809 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.10s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.38s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 ssh -p old-k8s-version-085809 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20220726-ed811e41
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.38s)
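
Note: the image check above dumps the node's CRI image store as JSON over SSH. A hand-run equivalent that extracts just the tags (sketch; assumes jq is installed on the host running minikube):
# Non-minikube images such as kindest/kindnetd appear here exactly as in the test output.
out/minikube-linux-arm64 ssh -p old-k8s-version-085809 "sudo crictl images -o json" | jq -r '.images[].repoTags[]'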

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Pause (3.88s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p old-k8s-version-085809 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-arm64 pause -p old-k8s-version-085809 --alsologtostderr -v=1: (1.038377506s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-085809 -n old-k8s-version-085809
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-085809 -n old-k8s-version-085809: exit status 2 (434.604701ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-085809 -n old-k8s-version-085809
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-085809 -n old-k8s-version-085809: exit status 2 (401.652737ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p old-k8s-version-085809 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-085809 -n old-k8s-version-085809
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-085809 -n old-k8s-version-085809
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (3.88s)
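
Note: the Pause test drives a pause/verify/unpause/verify cycle; exit status 2 from "status" is expected while components are paused. A hand-run sketch of the same loop:
out/minikube-linux-arm64 pause -p old-k8s-version-085809 --alsologtostderr -v=1
out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-085809 -n old-k8s-version-085809   # expect "Paused", exit status 2
out/minikube-linux-arm64 unpause -p old-k8s-version-085809 --alsologtostderr -v=1
out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-085809 -n old-k8s-version-085809   # expect "Running", exit status 0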

                                                
                                    
TestStartStop/group/embed-certs/serial/FirstStart (87.7s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-224896 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.27.2
E0531 19:40:18.520532    7804 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16569-2389/.minikube/profiles/functional-747104/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-224896 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.27.2: (1m27.70059912s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (87.70s)

                                                
                                    
TestStartStop/group/embed-certs/serial/DeployApp (9.58s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-224896 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [0b1fce2e-92ab-44a5-b13b-b96ae66df7be] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [0b1fce2e-92ab-44a5-b13b-b96ae66df7be] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 9.028914231s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-224896 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (9.58s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.01s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-224896 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-224896 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.01s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Stop (12.12s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p embed-certs-224896 --alsologtostderr -v=3
E0531 19:41:18.101746    7804 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16569-2389/.minikube/profiles/old-k8s-version-085809/client.crt: no such file or directory
E0531 19:41:18.107221    7804 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16569-2389/.minikube/profiles/old-k8s-version-085809/client.crt: no such file or directory
E0531 19:41:18.117589    7804 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16569-2389/.minikube/profiles/old-k8s-version-085809/client.crt: no such file or directory
E0531 19:41:18.137947    7804 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16569-2389/.minikube/profiles/old-k8s-version-085809/client.crt: no such file or directory
E0531 19:41:18.178236    7804 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16569-2389/.minikube/profiles/old-k8s-version-085809/client.crt: no such file or directory
E0531 19:41:18.258537    7804 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16569-2389/.minikube/profiles/old-k8s-version-085809/client.crt: no such file or directory
E0531 19:41:18.418947    7804 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16569-2389/.minikube/profiles/old-k8s-version-085809/client.crt: no such file or directory
E0531 19:41:18.739470    7804 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16569-2389/.minikube/profiles/old-k8s-version-085809/client.crt: no such file or directory
E0531 19:41:19.380415    7804 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16569-2389/.minikube/profiles/old-k8s-version-085809/client.crt: no such file or directory
E0531 19:41:20.660686    7804 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16569-2389/.minikube/profiles/old-k8s-version-085809/client.crt: no such file or directory
E0531 19:41:23.220847    7804 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16569-2389/.minikube/profiles/old-k8s-version-085809/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p embed-certs-224896 --alsologtostderr -v=3: (12.116835846s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (12.12s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.16s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-224896 -n embed-certs-224896
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-224896 -n embed-certs-224896: exit status 7 (70.16779ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p embed-certs-224896 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.16s)

                                                
                                    
TestStartStop/group/embed-certs/serial/SecondStart (623.57s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-224896 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.27.2
E0531 19:41:28.341403    7804 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16569-2389/.minikube/profiles/old-k8s-version-085809/client.crt: no such file or directory
E0531 19:41:38.582160    7804 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16569-2389/.minikube/profiles/old-k8s-version-085809/client.crt: no such file or directory
E0531 19:41:48.304311    7804 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16569-2389/.minikube/profiles/addons-748280/client.crt: no such file or directory
E0531 19:41:59.062361    7804 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16569-2389/.minikube/profiles/old-k8s-version-085809/client.crt: no such file or directory
E0531 19:42:05.253700    7804 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16569-2389/.minikube/profiles/addons-748280/client.crt: no such file or directory
E0531 19:42:40.023105    7804 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16569-2389/.minikube/profiles/old-k8s-version-085809/client.crt: no such file or directory
E0531 19:43:31.868508    7804 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16569-2389/.minikube/profiles/ingress-addon-legacy-546551/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-224896 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.27.2: (10m23.032723737s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-224896 -n embed-certs-224896
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (623.57s)

                                                
                                    
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (5.03s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-5c5cfc8747-577vf" [3a35d7bb-ed1c-444a-aef6-237de120c901] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.024464684s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (5.03s)

                                                
                                    
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.12s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-5c5cfc8747-577vf" [3a35d7bb-ed1c-444a-aef6-237de120c901] Running
E0531 19:44:01.943815    7804 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16569-2389/.minikube/profiles/old-k8s-version-085809/client.crt: no such file or directory
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.007341133s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-536753 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.12s)

                                                
                                    
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.33s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 ssh -p no-preload-536753 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.33s)

                                                
                                    
TestStartStop/group/no-preload/serial/Pause (3.39s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p no-preload-536753 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-536753 -n no-preload-536753
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-536753 -n no-preload-536753: exit status 2 (364.066082ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-536753 -n no-preload-536753
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-536753 -n no-preload-536753: exit status 2 (333.40646ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p no-preload-536753 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-536753 -n no-preload-536753
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-536753 -n no-preload-536753
--- PASS: TestStartStop/group/no-preload/serial/Pause (3.39s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (78.53s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-978602 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.27.2
E0531 19:45:18.520470    7804 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16569-2389/.minikube/profiles/functional-747104/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-978602 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.27.2: (1m18.532490292s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (78.53s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.49s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-978602 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [06b9d737-5274-42e8-b69b-6617963e6678] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [06b9d737-5274-42e8-b69b-6617963e6678] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 10.025167171s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-978602 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.49s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.03s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-978602 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-978602 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.03s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Stop (12.11s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p default-k8s-diff-port-978602 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p default-k8s-diff-port-978602 --alsologtostderr -v=3: (12.109098455s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (12.11s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.18s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-978602 -n default-k8s-diff-port-978602
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-978602 -n default-k8s-diff-port-978602: exit status 7 (87.704815ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p default-k8s-diff-port-978602 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.18s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (626.51s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-978602 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.27.2
E0531 19:46:18.101785    7804 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16569-2389/.minikube/profiles/old-k8s-version-085809/client.crt: no such file or directory
E0531 19:46:45.783980    7804 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16569-2389/.minikube/profiles/old-k8s-version-085809/client.crt: no such file or directory
E0531 19:47:05.253607    7804 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16569-2389/.minikube/profiles/addons-748280/client.crt: no such file or directory
E0531 19:48:31.868181    7804 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16569-2389/.minikube/profiles/ingress-addon-legacy-546551/client.crt: no such file or directory
E0531 19:48:32.873590    7804 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16569-2389/.minikube/profiles/no-preload-536753/client.crt: no such file or directory
E0531 19:48:32.878884    7804 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16569-2389/.minikube/profiles/no-preload-536753/client.crt: no such file or directory
E0531 19:48:32.889188    7804 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16569-2389/.minikube/profiles/no-preload-536753/client.crt: no such file or directory
E0531 19:48:32.909510    7804 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16569-2389/.minikube/profiles/no-preload-536753/client.crt: no such file or directory
E0531 19:48:32.949809    7804 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16569-2389/.minikube/profiles/no-preload-536753/client.crt: no such file or directory
E0531 19:48:33.030088    7804 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16569-2389/.minikube/profiles/no-preload-536753/client.crt: no such file or directory
E0531 19:48:33.190515    7804 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16569-2389/.minikube/profiles/no-preload-536753/client.crt: no such file or directory
E0531 19:48:33.511122    7804 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16569-2389/.minikube/profiles/no-preload-536753/client.crt: no such file or directory
E0531 19:48:34.151365    7804 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16569-2389/.minikube/profiles/no-preload-536753/client.crt: no such file or directory
E0531 19:48:35.431932    7804 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16569-2389/.minikube/profiles/no-preload-536753/client.crt: no such file or directory
E0531 19:48:37.992114    7804 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16569-2389/.minikube/profiles/no-preload-536753/client.crt: no such file or directory
E0531 19:48:43.112270    7804 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16569-2389/.minikube/profiles/no-preload-536753/client.crt: no such file or directory
E0531 19:48:53.353278    7804 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16569-2389/.minikube/profiles/no-preload-536753/client.crt: no such file or directory
E0531 19:49:13.834429    7804 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16569-2389/.minikube/profiles/no-preload-536753/client.crt: no such file or directory
E0531 19:49:54.795013    7804 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16569-2389/.minikube/profiles/no-preload-536753/client.crt: no such file or directory
E0531 19:50:01.565970    7804 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16569-2389/.minikube/profiles/functional-747104/client.crt: no such file or directory
E0531 19:50:18.520853    7804 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16569-2389/.minikube/profiles/functional-747104/client.crt: no such file or directory
E0531 19:51:16.715433    7804 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16569-2389/.minikube/profiles/no-preload-536753/client.crt: no such file or directory
E0531 19:51:18.102292    7804 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16569-2389/.minikube/profiles/old-k8s-version-085809/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-978602 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.27.2: (10m26.128273325s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-978602 -n default-k8s-diff-port-978602
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (626.51s)

                                                
                                    
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (5.03s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-5c5cfc8747-btdlj" [9c73e385-105e-4571-a529-21c8d9f6a022] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.028625466s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (5.03s)

                                                
                                    
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.11s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-5c5cfc8747-btdlj" [9c73e385-105e-4571-a529-21c8d9f6a022] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.008394392s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-224896 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.11s)

                                                
                                    
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.37s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 ssh -p embed-certs-224896 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.37s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Pause (3.32s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p embed-certs-224896 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-224896 -n embed-certs-224896
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-224896 -n embed-certs-224896: exit status 2 (353.259914ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-224896 -n embed-certs-224896
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-224896 -n embed-certs-224896: exit status 2 (369.385869ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p embed-certs-224896 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-224896 -n embed-certs-224896
E0531 19:52:05.253489    7804 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16569-2389/.minikube/profiles/addons-748280/client.crt: no such file or directory
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-224896 -n embed-certs-224896
--- PASS: TestStartStop/group/embed-certs/serial/Pause (3.32s)

                                                
                                    
TestStartStop/group/newest-cni/serial/FirstStart (46.07s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-232130 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.27.2
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-232130 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.27.2: (46.070742423s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (46.07s)

                                                
                                    
TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.96s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-232130 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.96s)
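
Note: the "cni mode requires additional setup" warnings are expected when the cluster is started with --network-plugin=cni and no CNI has been applied yet; pods stay Pending until one is installed. A hedged sketch of finishing the setup by hand (kube-flannel is only an example, and its manifest defaults to 10.244.0.0/16, so it would need editing to match the 10.42.0.0/16 pod CIDR used above):
# Apply a CNI manifest of your choice; flannel shown purely as an example.
kubectl --context newest-cni-232130 apply -f https://raw.githubusercontent.com/flannel-io/flannel/master/Documentation/kube-flannel.yml
# Scheduling resumes once the CNI pods report Ready.
kubectl --context newest-cni-232130 get pods -A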

                                                
                                    
TestStartStop/group/newest-cni/serial/Stop (1.26s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p newest-cni-232130 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p newest-cni-232130 --alsologtostderr -v=3: (1.258727735s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (1.26s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.16s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-232130 -n newest-cni-232130
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-232130 -n newest-cni-232130: exit status 7 (74.573423ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p newest-cni-232130 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.16s)

                                                
                                    
TestStartStop/group/newest-cni/serial/SecondStart (31.96s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-232130 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.27.2
E0531 19:53:14.911055    7804 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16569-2389/.minikube/profiles/ingress-addon-legacy-546551/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-232130 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.27.2: (31.582328958s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-232130 -n newest-cni-232130
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (31.96s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.37s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 ssh -p newest-cni-232130 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.37s)
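
Note: the image check above can be reproduced by hand. A minimal sketch, assuming jq is installed on the host (jq and the flattening filter are illustrative additions, not part of the harness):

  # List the image tags CRI-O knows about inside the node, as the test does,
  # then flatten the JSON output for reading.
  out/minikube-linux-arm64 ssh -p newest-cni-232130 "sudo crictl images -o json" \
    | jq -r '.images[].repoTags[]'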

TestStartStop/group/newest-cni/serial/Pause (3.15s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p newest-cni-232130 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-232130 -n newest-cni-232130
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-232130 -n newest-cni-232130: exit status 2 (354.351024ms)
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-232130 -n newest-cni-232130
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-232130 -n newest-cni-232130: exit status 2 (345.811654ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p newest-cni-232130 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-232130 -n newest-cni-232130
E0531 19:53:31.867679    7804 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16569-2389/.minikube/profiles/ingress-addon-legacy-546551/client.crt: no such file or directory
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-232130 -n newest-cni-232130
--- PASS: TestStartStop/group/newest-cni/serial/Pause (3.15s)
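
Note: the non-zero exits above are expected, since minikube status reports component state in its exit code as well as on stdout. A minimal sketch of the same pause/unpause round-trip by hand (profile name taken from this run; the exit-code comments reflect what this log shows):

  out/minikube-linux-arm64 pause -p newest-cni-232130
  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-232130   # prints "Paused", exits 2
  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-232130     # prints "Stopped", exits 2
  out/minikube-linux-arm64 unpause -p newest-cni-232130
  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-232130   # exits 0 once running again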

TestNetworkPlugins/group/auto/Start (79.85s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:111: (dbg) Run:  out/minikube-linux-arm64 start -p auto-452504 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio
E0531 19:54:00.556081    7804 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16569-2389/.minikube/profiles/no-preload-536753/client.crt: no such file or directory
net_test.go:111: (dbg) Done: out/minikube-linux-arm64 start -p auto-452504 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio: (1m19.850221463s)
--- PASS: TestNetworkPlugins/group/auto/Start (79.85s)

TestNetworkPlugins/group/auto/KubeletFlags (0.33s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:132: (dbg) Run:  out/minikube-linux-arm64 ssh -p auto-452504 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.33s)

TestNetworkPlugins/group/auto/NetCatPod (9.38s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:148: (dbg) Run:  kubectl --context auto-452504 replace --force -f testdata/netcat-deployment.yaml
net_test.go:162: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-7458db8b8-p9jg2" [3c3dd4e8-4a7d-4640-a864-eec4d0f56ce3] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-7458db8b8-p9jg2" [3c3dd4e8-4a7d-4640-a864-eec4d0f56ce3] Running
net_test.go:162: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 9.007255813s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (9.38s)

TestNetworkPlugins/group/auto/DNS (0.22s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:174: (dbg) Run:  kubectl --context auto-452504 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.22s)

TestNetworkPlugins/group/auto/Localhost (0.21s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:193: (dbg) Run:  kubectl --context auto-452504 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.21s)

TestNetworkPlugins/group/auto/HairPin (0.18s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:248: (dbg) Run:  kubectl --context auto-452504 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.18s)
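
Note: every network-plugin group closes with the same three probes against its netcat deployment; they are collected here for reference, using exactly the commands logged above:

  # DNS: resolve the in-cluster service domain from the pod
  kubectl --context auto-452504 exec deployment/netcat -- nslookup kubernetes.default
  # Localhost: the pod can reach its own port over loopback
  kubectl --context auto-452504 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
  # HairPin: the pod can reach itself through the "netcat" service name
  kubectl --context auto-452504 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"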

TestNetworkPlugins/group/kindnet/Start (77.5s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:111: (dbg) Run:  out/minikube-linux-arm64 start -p kindnet-452504 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio
E0531 19:56:18.102328    7804 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16569-2389/.minikube/profiles/old-k8s-version-085809/client.crt: no such file or directory
net_test.go:111: (dbg) Done: out/minikube-linux-arm64 start -p kindnet-452504 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio: (1m17.501892561s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (77.50s)

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (5.03s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-5c5cfc8747-njg2b" [588b41f7-17b6-40d0-a073-d6abd66d2730] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.025852131s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (5.03s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.11s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-5c5cfc8747-njg2b" [588b41f7-17b6-40d0-a073-d6abd66d2730] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.00734922s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-978602 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.11s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.33s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 ssh -p default-k8s-diff-port-978602 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.33s)

TestStartStop/group/default-k8s-diff-port/serial/Pause (3.28s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p default-k8s-diff-port-978602 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-978602 -n default-k8s-diff-port-978602
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-978602 -n default-k8s-diff-port-978602: exit status 2 (349.060172ms)
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-978602 -n default-k8s-diff-port-978602
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-978602 -n default-k8s-diff-port-978602: exit status 2 (347.639272ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p default-k8s-diff-port-978602 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-978602 -n default-k8s-diff-port-978602
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-978602 -n default-k8s-diff-port-978602
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (3.28s)
E0531 20:01:17.507811    7804 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16569-2389/.minikube/profiles/auto-452504/client.crt: no such file or directory
E0531 20:01:18.101750    7804 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16569-2389/.minikube/profiles/old-k8s-version-085809/client.crt: no such file or directory
E0531 20:01:43.340775    7804 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16569-2389/.minikube/profiles/kindnet-452504/client.crt: no such file or directory
E0531 20:01:43.346102    7804 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16569-2389/.minikube/profiles/kindnet-452504/client.crt: no such file or directory
E0531 20:01:43.356437    7804 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16569-2389/.minikube/profiles/kindnet-452504/client.crt: no such file or directory
E0531 20:01:43.376639    7804 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16569-2389/.minikube/profiles/kindnet-452504/client.crt: no such file or directory
E0531 20:01:43.416926    7804 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16569-2389/.minikube/profiles/kindnet-452504/client.crt: no such file or directory
E0531 20:01:43.497245    7804 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16569-2389/.minikube/profiles/kindnet-452504/client.crt: no such file or directory
E0531 20:01:43.657679    7804 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16569-2389/.minikube/profiles/kindnet-452504/client.crt: no such file or directory
E0531 20:01:43.978026    7804 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16569-2389/.minikube/profiles/kindnet-452504/client.crt: no such file or directory
E0531 20:01:44.618946    7804 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16569-2389/.minikube/profiles/kindnet-452504/client.crt: no such file or directory
E0531 20:01:45.899663    7804 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16569-2389/.minikube/profiles/kindnet-452504/client.crt: no such file or directory
E0531 20:01:48.460088    7804 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16569-2389/.minikube/profiles/kindnet-452504/client.crt: no such file or directory
E0531 20:01:52.664375    7804 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16569-2389/.minikube/profiles/default-k8s-diff-port-978602/client.crt: no such file or directory
E0531 20:01:53.581094    7804 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16569-2389/.minikube/profiles/kindnet-452504/client.crt: no such file or directory
E0531 20:02:03.821867    7804 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16569-2389/.minikube/profiles/kindnet-452504/client.crt: no such file or directory
E0531 20:02:05.253305    7804 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16569-2389/.minikube/profiles/addons-748280/client.crt: no such file or directory

TestNetworkPlugins/group/calico/Start (71.81s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:111: (dbg) Run:  out/minikube-linux-arm64 start -p calico-452504 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio
net_test.go:111: (dbg) Done: out/minikube-linux-arm64 start -p calico-452504 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio: (1m11.812165558s)
--- PASS: TestNetworkPlugins/group/calico/Start (71.81s)

TestNetworkPlugins/group/kindnet/ControllerPod (5.04s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:119: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-4r5tl" [d611203c-269c-47cb-9a8d-6f695dbb2514] Running
net_test.go:119: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 5.038555472s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (5.04s)
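
Note: the ControllerPod step polls for up to 10m until pods matching the CNI's label are Running. Roughly the same check can be done by hand with kubectl wait (a sketch, not the harness's own code; Ready is a slightly stricter condition than Running):

  kubectl --context kindnet-452504 -n kube-system wait --for=condition=Ready pod -l app=kindnet --timeout=10m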

TestNetworkPlugins/group/kindnet/KubeletFlags (0.36s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:132: (dbg) Run:  out/minikube-linux-arm64 ssh -p kindnet-452504 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.36s)

TestNetworkPlugins/group/kindnet/NetCatPod (13.56s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:148: (dbg) Run:  kubectl --context kindnet-452504 replace --force -f testdata/netcat-deployment.yaml
net_test.go:162: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-7458db8b8-wqblm" [ca0cfe3e-8a17-498a-82a9-0ba5aa0a2368] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-7458db8b8-wqblm" [ca0cfe3e-8a17-498a-82a9-0ba5aa0a2368] Running
net_test.go:162: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 13.00855681s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (13.56s)

TestNetworkPlugins/group/kindnet/DNS (0.33s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:174: (dbg) Run:  kubectl --context kindnet-452504 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.33s)

TestNetworkPlugins/group/kindnet/Localhost (0.35s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:193: (dbg) Run:  kubectl --context kindnet-452504 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.35s)

TestNetworkPlugins/group/kindnet/HairPin (0.31s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:248: (dbg) Run:  kubectl --context kindnet-452504 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.31s)

TestNetworkPlugins/group/custom-flannel/Start (71.18s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:111: (dbg) Run:  out/minikube-linux-arm64 start -p custom-flannel-452504 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio
E0531 19:57:41.144353    7804 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16569-2389/.minikube/profiles/old-k8s-version-085809/client.crt: no such file or directory
net_test.go:111: (dbg) Done: out/minikube-linux-arm64 start -p custom-flannel-452504 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio: (1m11.182258318s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (71.18s)

TestNetworkPlugins/group/calico/ControllerPod (5.06s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:119: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-swcnm" [65778f27-422e-4b61-b44d-83c210bccd62] Running
net_test.go:119: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 5.060204915s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (5.06s)

TestNetworkPlugins/group/calico/KubeletFlags (0.49s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:132: (dbg) Run:  out/minikube-linux-arm64 ssh -p calico-452504 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.49s)

TestNetworkPlugins/group/calico/NetCatPod (13.71s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:148: (dbg) Run:  kubectl --context calico-452504 replace --force -f testdata/netcat-deployment.yaml
net_test.go:162: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-7458db8b8-m4jgb" [ef6787e0-fb52-4e93-bd1f-527fdfb179b1] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-7458db8b8-m4jgb" [ef6787e0-fb52-4e93-bd1f-527fdfb179b1] Running
net_test.go:162: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 13.027001493s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (13.71s)

TestNetworkPlugins/group/calico/DNS (0.29s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:174: (dbg) Run:  kubectl --context calico-452504 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.29s)

TestNetworkPlugins/group/calico/Localhost (0.22s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:193: (dbg) Run:  kubectl --context calico-452504 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.22s)

TestNetworkPlugins/group/calico/HairPin (0.24s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:248: (dbg) Run:  kubectl --context calico-452504 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.24s)

TestNetworkPlugins/group/enable-default-cni/Start (91.27s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:111: (dbg) Run:  out/minikube-linux-arm64 start -p enable-default-cni-452504 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio
net_test.go:111: (dbg) Done: out/minikube-linux-arm64 start -p enable-default-cni-452504 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio: (1m31.265308375s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (91.27s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.36s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:132: (dbg) Run:  out/minikube-linux-arm64 ssh -p custom-flannel-452504 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.36s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (11.5s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:148: (dbg) Run:  kubectl --context custom-flannel-452504 replace --force -f testdata/netcat-deployment.yaml
net_test.go:162: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-7458db8b8-69t7k" [4f402813-b320-418b-92f8-3da9a34673ef] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-7458db8b8-69t7k" [4f402813-b320-418b-92f8-3da9a34673ef] Running
net_test.go:162: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 11.009381375s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (11.50s)

TestNetworkPlugins/group/custom-flannel/DNS (0.27s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:174: (dbg) Run:  kubectl --context custom-flannel-452504 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.27s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.23s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:193: (dbg) Run:  kubectl --context custom-flannel-452504 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.23s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.24s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:248: (dbg) Run:  kubectl --context custom-flannel-452504 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.24s)

TestNetworkPlugins/group/flannel/Start (71.68s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:111: (dbg) Run:  out/minikube-linux-arm64 start -p flannel-452504 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio
E0531 19:59:55.586188    7804 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16569-2389/.minikube/profiles/auto-452504/client.crt: no such file or directory
E0531 19:59:55.591459    7804 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16569-2389/.minikube/profiles/auto-452504/client.crt: no such file or directory
E0531 19:59:55.601732    7804 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16569-2389/.minikube/profiles/auto-452504/client.crt: no such file or directory
E0531 19:59:55.621974    7804 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16569-2389/.minikube/profiles/auto-452504/client.crt: no such file or directory
E0531 19:59:55.662293    7804 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16569-2389/.minikube/profiles/auto-452504/client.crt: no such file or directory
E0531 19:59:55.742560    7804 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16569-2389/.minikube/profiles/auto-452504/client.crt: no such file or directory
E0531 19:59:55.902896    7804 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16569-2389/.minikube/profiles/auto-452504/client.crt: no such file or directory
E0531 19:59:56.223603    7804 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16569-2389/.minikube/profiles/auto-452504/client.crt: no such file or directory
E0531 19:59:56.864475    7804 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16569-2389/.minikube/profiles/auto-452504/client.crt: no such file or directory
E0531 19:59:58.145612    7804 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16569-2389/.minikube/profiles/auto-452504/client.crt: no such file or directory
E0531 20:00:00.706764    7804 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16569-2389/.minikube/profiles/auto-452504/client.crt: no such file or directory
net_test.go:111: (dbg) Done: out/minikube-linux-arm64 start -p flannel-452504 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio: (1m11.684718425s)
--- PASS: TestNetworkPlugins/group/flannel/Start (71.68s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.38s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:132: (dbg) Run:  out/minikube-linux-arm64 ssh -p enable-default-cni-452504 "pgrep -a kubelet"
E0531 20:00:05.827382    7804 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16569-2389/.minikube/profiles/auto-452504/client.crt: no such file or directory
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.38s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (12.62s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:148: (dbg) Run:  kubectl --context enable-default-cni-452504 replace --force -f testdata/netcat-deployment.yaml
net_test.go:162: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-7458db8b8-n6vw4" [204b5810-785b-48de-99bb-293cbc165dee] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-7458db8b8-n6vw4" [204b5810-785b-48de-99bb-293cbc165dee] Running
E0531 20:00:16.067571    7804 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16569-2389/.minikube/profiles/auto-452504/client.crt: no such file or directory
E0531 20:00:18.521097    7804 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16569-2389/.minikube/profiles/functional-747104/client.crt: no such file or directory
net_test.go:162: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 12.011682855s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (12.62s)

TestNetworkPlugins/group/enable-default-cni/DNS (0.22s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:174: (dbg) Run:  kubectl --context enable-default-cni-452504 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.22s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.2s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:193: (dbg) Run:  kubectl --context enable-default-cni-452504 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.20s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.24s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:248: (dbg) Run:  kubectl --context enable-default-cni-452504 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.24s)

TestNetworkPlugins/group/flannel/ControllerPod (5.03s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:119: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-5st5h" [8849907c-87be-4239-9580-e7e3f5a30f66] Running
E0531 20:00:30.695459    7804 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16569-2389/.minikube/profiles/default-k8s-diff-port-978602/client.crt: no such file or directory
E0531 20:00:30.700871    7804 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16569-2389/.minikube/profiles/default-k8s-diff-port-978602/client.crt: no such file or directory
E0531 20:00:30.711177    7804 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16569-2389/.minikube/profiles/default-k8s-diff-port-978602/client.crt: no such file or directory
E0531 20:00:30.731494    7804 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16569-2389/.minikube/profiles/default-k8s-diff-port-978602/client.crt: no such file or directory
E0531 20:00:30.772288    7804 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16569-2389/.minikube/profiles/default-k8s-diff-port-978602/client.crt: no such file or directory
E0531 20:00:30.853087    7804 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16569-2389/.minikube/profiles/default-k8s-diff-port-978602/client.crt: no such file or directory
E0531 20:00:31.013430    7804 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16569-2389/.minikube/profiles/default-k8s-diff-port-978602/client.crt: no such file or directory
E0531 20:00:31.333991    7804 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16569-2389/.minikube/profiles/default-k8s-diff-port-978602/client.crt: no such file or directory
E0531 20:00:31.974791    7804 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16569-2389/.minikube/profiles/default-k8s-diff-port-978602/client.crt: no such file or directory
E0531 20:00:33.255461    7804 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16569-2389/.minikube/profiles/default-k8s-diff-port-978602/client.crt: no such file or directory
net_test.go:119: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 5.030319781s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (5.03s)

TestNetworkPlugins/group/flannel/KubeletFlags (0.34s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:132: (dbg) Run:  out/minikube-linux-arm64 ssh -p flannel-452504 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.34s)

TestNetworkPlugins/group/flannel/NetCatPod (12.52s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:148: (dbg) Run:  kubectl --context flannel-452504 replace --force -f testdata/netcat-deployment.yaml
E0531 20:00:35.816294    7804 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16569-2389/.minikube/profiles/default-k8s-diff-port-978602/client.crt: no such file or directory
net_test.go:162: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-7458db8b8-4wmzz" [4907803f-ea4d-4c53-868d-9cf2fb0e6b95] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0531 20:00:36.547680    7804 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16569-2389/.minikube/profiles/auto-452504/client.crt: no such file or directory
helpers_test.go:344: "netcat-7458db8b8-4wmzz" [4907803f-ea4d-4c53-868d-9cf2fb0e6b95] Running
net_test.go:162: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 12.014226886s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (12.52s)

TestNetworkPlugins/group/bridge/Start (92.74s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:111: (dbg) Run:  out/minikube-linux-arm64 start -p bridge-452504 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio
net_test.go:111: (dbg) Done: out/minikube-linux-arm64 start -p bridge-452504 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio: (1m32.736987786s)
--- PASS: TestNetworkPlugins/group/bridge/Start (92.74s)

TestNetworkPlugins/group/flannel/DNS (0.25s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:174: (dbg) Run:  kubectl --context flannel-452504 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.25s)

TestNetworkPlugins/group/flannel/Localhost (0.19s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:193: (dbg) Run:  kubectl --context flannel-452504 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.19s)

TestNetworkPlugins/group/flannel/HairPin (0.18s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:248: (dbg) Run:  kubectl --context flannel-452504 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.18s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.29s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:132: (dbg) Run:  out/minikube-linux-arm64 ssh -p bridge-452504 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.29s)

TestNetworkPlugins/group/bridge/NetCatPod (10.35s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:148: (dbg) Run:  kubectl --context bridge-452504 replace --force -f testdata/netcat-deployment.yaml
net_test.go:162: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-7458db8b8-8k6s7" [76585c82-e599-45a2-b474-53092232ec00] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-7458db8b8-8k6s7" [76585c82-e599-45a2-b474-53092232ec00] Running
E0531 20:02:24.302094    7804 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16569-2389/.minikube/profiles/kindnet-452504/client.crt: no such file or directory
net_test.go:162: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 10.007275163s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (10.35s)

TestNetworkPlugins/group/bridge/DNS (0.22s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:174: (dbg) Run:  kubectl --context bridge-452504 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.22s)

TestNetworkPlugins/group/bridge/Localhost (0.19s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:193: (dbg) Run:  kubectl --context bridge-452504 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.19s)

TestNetworkPlugins/group/bridge/HairPin (0.19s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:248: (dbg) Run:  kubectl --context bridge-452504 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.19s)
Test skip (28/296)

TestDownloadOnly/v1.16.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.16.0/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.16.0/cached-images (0.00s)

TestDownloadOnly/v1.16.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.16.0/binaries
aaa_download_only_test.go:136: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.16.0/binaries (0.00s)

TestDownloadOnly/v1.16.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.16.0/kubectl
aaa_download_only_test.go:152: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.16.0/kubectl (0.00s)

TestDownloadOnly/v1.27.2/cached-images (0s)

=== RUN   TestDownloadOnly/v1.27.2/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.27.2/cached-images (0.00s)

TestDownloadOnly/v1.27.2/binaries (0s)

=== RUN   TestDownloadOnly/v1.27.2/binaries
aaa_download_only_test.go:136: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.27.2/binaries (0.00s)

TestDownloadOnly/v1.27.2/kubectl (0s)

=== RUN   TestDownloadOnly/v1.27.2/kubectl
aaa_download_only_test.go:152: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.27.2/kubectl (0.00s)

TestDownloadOnlyKic (0.66s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:222: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p download-docker-298073 --alsologtostderr --driver=docker  --container-runtime=crio
aaa_download_only_test.go:234: Skip for arm64 platform. See https://github.com/kubernetes/minikube/issues/10144
helpers_test.go:175: Cleaning up "download-docker-298073" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p download-docker-298073
--- SKIP: TestDownloadOnlyKic (0.66s)

TestOffline (0s)

=== RUN   TestOffline
=== PAUSE TestOffline
=== CONT  TestOffline
aab_offline_test.go:35: skipping TestOffline - only docker runtime supported on arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestOffline (0.00s)

TestAddons/parallel/HelmTiller (0s)

=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:420: skip Helm test on arm64
--- SKIP: TestAddons/parallel/HelmTiller (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm
=== CONT  TestAddons/parallel/Olm
addons_test.go:474: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerFlags (0s)

=== RUN   TestDockerFlags
docker_test.go:35: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

TestKVMDriverInstallOrUpdate (0s)

=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:45: Skip if arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/MySQL (0s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1782: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

TestFunctional/parallel/DockerEnv (0s)

=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:458: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:545: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild (0s)

=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestSkaffold (0s)

=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

TestStartStop/group/disable-driver-mounts (0.17s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-250489" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p disable-driver-mounts-250489
--- SKIP: TestStartStop/group/disable-driver-mounts (0.17s)
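
Driver-gated groups follow the same shape, with one wrinkle visible in the log above: a deferred cleanup still runs when the test skips, which is why the profile is deleted before the SKIP result is printed. A sketch under the assumption of hypothetical DriverName() and deleteProfile() helpers:

    package integration

    import "testing"

    // DriverName and deleteProfile are hypothetical stand-ins; the real
    // suite reads the --driver flag and shells out to
    // "out/minikube-linux-arm64 delete -p <profile>".
    func DriverName() string { return "docker" }

    func deleteProfile(t *testing.T, profile string) {
        t.Logf("Cleaning up %q profile ...", profile)
    }

    func TestDisableDriverMountsGuard(t *testing.T) {
        profile := "disable-driver-mounts-250489"
        // t.Skip runs deferred functions on its way out, so the profile
        // is deleted even though the test body never executes.
        defer deleteProfile(t, profile)
        if DriverName() != "virtualbox" {
            t.Skip("only runs on virtualbox")
        }
    }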

TestNetworkPlugins/group/kubenet (4.39s)

=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:92: Skipping the test as crio container runtimes requires CNI
panic.go:522: 
----------------------- debugLogs start: kubenet-452504 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-452504

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-452504

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-452504

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-452504

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-452504

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-452504

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-452504

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-452504

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-452504

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-452504

>>> host: /etc/nsswitch.conf:
* Profile "kubenet-452504" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-452504"

>>> host: /etc/hosts:
* Profile "kubenet-452504" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-452504"

>>> host: /etc/resolv.conf:
* Profile "kubenet-452504" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-452504"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-452504

>>> host: crictl pods:
* Profile "kubenet-452504" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-452504"

>>> host: crictl containers:
* Profile "kubenet-452504" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-452504"

>>> k8s: describe netcat deployment:
error: context "kubenet-452504" does not exist

>>> k8s: describe netcat pod(s):
error: context "kubenet-452504" does not exist

>>> k8s: netcat logs:
error: context "kubenet-452504" does not exist

>>> k8s: describe coredns deployment:
error: context "kubenet-452504" does not exist

>>> k8s: describe coredns pods:
error: context "kubenet-452504" does not exist

>>> k8s: coredns logs:
error: context "kubenet-452504" does not exist

>>> k8s: describe api server pod(s):
error: context "kubenet-452504" does not exist

>>> k8s: api server logs:
error: context "kubenet-452504" does not exist

>>> host: /etc/cni:
* Profile "kubenet-452504" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-452504"

>>> host: ip a s:
* Profile "kubenet-452504" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-452504"

>>> host: ip r s:
* Profile "kubenet-452504" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-452504"

>>> host: iptables-save:
* Profile "kubenet-452504" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-452504"

>>> host: iptables table nat:
* Profile "kubenet-452504" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-452504"

>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-452504" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-452504" does not exist

>>> k8s: kube-proxy logs:
error: context "kubenet-452504" does not exist

>>> host: kubelet daemon status:
* Profile "kubenet-452504" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-452504"

>>> host: kubelet daemon config:
* Profile "kubenet-452504" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-452504"

>>> k8s: kubelet logs:
* Profile "kubenet-452504" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-452504"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-452504" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-452504"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-452504" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-452504"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-452504

>>> host: docker daemon status:
* Profile "kubenet-452504" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-452504"

>>> host: docker daemon config:
* Profile "kubenet-452504" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-452504"

>>> host: /etc/docker/daemon.json:
* Profile "kubenet-452504" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-452504"

>>> host: docker system info:
* Profile "kubenet-452504" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-452504"

>>> host: cri-docker daemon status:
* Profile "kubenet-452504" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-452504"

>>> host: cri-docker daemon config:
* Profile "kubenet-452504" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-452504"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-452504" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-452504"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-452504" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-452504"

>>> host: cri-dockerd version:
* Profile "kubenet-452504" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-452504"

>>> host: containerd daemon status:
* Profile "kubenet-452504" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-452504"

>>> host: containerd daemon config:
* Profile "kubenet-452504" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-452504"

>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-452504" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-452504"

>>> host: /etc/containerd/config.toml:
* Profile "kubenet-452504" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-452504"

>>> host: containerd config dump:
* Profile "kubenet-452504" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-452504"

>>> host: crio daemon status:
* Profile "kubenet-452504" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-452504"

>>> host: crio daemon config:
* Profile "kubenet-452504" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-452504"

>>> host: /etc/crio:
* Profile "kubenet-452504" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-452504"

>>> host: crio config:
* Profile "kubenet-452504" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-452504"

----------------------- debugLogs end: kubenet-452504 [took: 4.205814705s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-452504" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubenet-452504
--- SKIP: TestNetworkPlugins/group/kubenet (4.39s)
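
Every "context was not found" and "Profile ... not found" line in the dump above comes from the post-test debug sweep, which runs its diagnostic commands against the profile name even though the skipped test never created that profile. A minimal, hypothetical sketch of such a collector (the real command list is much longer):

    package integration

    import (
        "os/exec"
        "testing"
    )

    // dumpDebugLogs runs a fixed set of diagnostic commands against a
    // profile and logs whatever comes back; when the profile was never
    // started, every command fails with the errors seen in the dump.
    func dumpDebugLogs(t *testing.T, profile string) {
        cmds := [][]string{
            {"kubectl", "--context", profile, "get", "nodes"},
            {"kubectl", "--context", profile, "describe", "deployment", "coredns", "-n", "kube-system"},
            {"minikube", "-p", profile, "ssh", "cat /etc/resolv.conf"},
        }
        for _, c := range cmds {
            out, err := exec.Command(c[0], c[1:]...).CombinedOutput()
            t.Logf(">>> %v:\n%s(err: %v)", c, out, err)
        }
    }

Called as dumpDebugLogs(t, "kubenet-452504"), every command fails fast, producing exactly the kind of wall of errors captured above.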

TestNetworkPlugins/group/cilium (4.91s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:101: Skipping the test as it's interfering with other tests and is outdated
panic.go:522: 
----------------------- debugLogs start: cilium-452504 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-452504

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-452504

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-452504

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-452504

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-452504

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-452504

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-452504

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-452504

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-452504

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-452504

>>> host: /etc/nsswitch.conf:
* Profile "cilium-452504" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-452504"

>>> host: /etc/hosts:
* Profile "cilium-452504" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-452504"

>>> host: /etc/resolv.conf:
* Profile "cilium-452504" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-452504"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-452504

>>> host: crictl pods:
* Profile "cilium-452504" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-452504"

>>> host: crictl containers:
* Profile "cilium-452504" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-452504"

>>> k8s: describe netcat deployment:
error: context "cilium-452504" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-452504" does not exist

>>> k8s: netcat logs:
error: context "cilium-452504" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-452504" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-452504" does not exist

>>> k8s: coredns logs:
error: context "cilium-452504" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-452504" does not exist

>>> k8s: api server logs:
error: context "cilium-452504" does not exist

>>> host: /etc/cni:
* Profile "cilium-452504" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-452504"

>>> host: ip a s:
* Profile "cilium-452504" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-452504"

>>> host: ip r s:
* Profile "cilium-452504" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-452504"

>>> host: iptables-save:
* Profile "cilium-452504" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-452504"

>>> host: iptables table nat:
* Profile "cilium-452504" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-452504"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-452504

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-452504

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-452504" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-452504" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-452504

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-452504

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-452504" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-452504" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-452504" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-452504" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-452504" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-452504" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-452504"

>>> host: kubelet daemon config:
* Profile "cilium-452504" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-452504"

>>> k8s: kubelet logs:
* Profile "cilium-452504" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-452504"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-452504" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-452504"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-452504" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-452504"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-452504

>>> host: docker daemon status:
* Profile "cilium-452504" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-452504"

>>> host: docker daemon config:
* Profile "cilium-452504" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-452504"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-452504" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-452504"

>>> host: docker system info:
* Profile "cilium-452504" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-452504"

>>> host: cri-docker daemon status:
* Profile "cilium-452504" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-452504"

>>> host: cri-docker daemon config:
* Profile "cilium-452504" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-452504"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-452504" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-452504"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-452504" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-452504"

>>> host: cri-dockerd version:
* Profile "cilium-452504" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-452504"

>>> host: containerd daemon status:
* Profile "cilium-452504" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-452504"

>>> host: containerd daemon config:
* Profile "cilium-452504" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-452504"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-452504" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-452504"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-452504" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-452504"

>>> host: containerd config dump:
* Profile "cilium-452504" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-452504"

>>> host: crio daemon status:
* Profile "cilium-452504" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-452504"

>>> host: crio daemon config:
* Profile "cilium-452504" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-452504"

>>> host: /etc/crio:
* Profile "cilium-452504" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-452504"

>>> host: crio config:
* Profile "cilium-452504" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-452504"

----------------------- debugLogs end: cilium-452504 [took: 4.644605046s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-452504" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cilium-452504
--- SKIP: TestNetworkPlugins/group/cilium (4.91s)